Inspiration

During brainstorming, we realized how vital each of our senses is to communication and to everyday tasks. From this, we decided to focus on the deaf and to build a device that would allow them not only to comprehend speech directed at them as text, but also to potentially locate the direction of voices and noises.

What it does

Our device allows the deaf to comprehend spoken language using a combination of the Microsoft Bot Service and the Google Cloud API. Facial recognition then determines the location of the speaker's face, opening up the possibility of rendering captions as speech bubbles to make the interaction feel more intuitive.

We built a Microsoft HoloLens app that lets the wearer see subtitled transcriptions of the sentences being spoken to them. For people with learning disabilities, attention deficits, or autism, closed captions increase concentration and aid comprehension. Users learning English as a second language also benefit, since captions make speech easier to follow.

How we built it

The Microsoft HoloLens is integral to the Deafpost experience, as it overlays text directly on top of the user's vision. The user can stay focused on the speaker and the nuances of body language and expression while still easily reading the captions that Deafpost generates. We developed Deafpost in Unity and Visual Studio, with code written primarily in C#. The Microsoft Bot Framework with LUIS and Microsoft Cognitive Services (the Speech-to-Text API) parse the speech in order to display captions for the user.
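For context, the transcription half of the pipeline boils down to streaming microphone audio to a speech recognizer and rendering each result as a caption. The sketch below shows roughly what that looks like with the current Cognitive Services Speech SDK; the key, region, and `displayCaption` callback are placeholders for illustration, not our actual app code.

```csharp
using System;
using System.Threading.Tasks;
using Microsoft.CognitiveServices.Speech;

public static class CaptionSource
{
    // Placeholder credentials -- substitute a real Speech resource key and region.
    const string SubscriptionKey = "YOUR_SPEECH_KEY";
    const string ServiceRegion = "westus";

    // Streams recognized text to a caller-supplied callback that renders
    // the caption in the HoloLens view (displayCaption is hypothetical).
    public static async Task<SpeechRecognizer> StartAsync(Action<string> displayCaption)
    {
        var config = SpeechConfig.FromSubscription(SubscriptionKey, ServiceRegion);
        var recognizer = new SpeechRecognizer(config);

        // Interim hypotheses let a caption appear while the speaker is mid-sentence.
        recognizer.Recognizing += (s, e) => displayCaption(e.Result.Text);

        // The final result replaces the interim caption once a phrase completes.
        recognizer.Recognized += (s, e) =>
        {
            if (e.Result.Reason == ResultReason.RecognizedSpeech)
                displayCaption(e.Result.Text);
        };

        await recognizer.StartContinuousRecognitionAsync();
        return recognizer; // caller stops it later via StopContinuousRecognitionAsync()
    }
}
```

Handling both the interim and final events is what makes the captions feel live: text appears while the speaker is still mid-sentence and is then corrected once the phrase is finalized.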

Challenges we ran into

Learning how to program for and use the HoloLens from scratch was a challenge in the early stages of the hackathon: no one on our team had any experience with augmented/mixed reality development, C#, or the Universal Windows Platform.

Accomplishments that we're proud of

Finishing within the time frame while learning a ton about the HoloLens and Azure Web Apps!

What's next for Deafpost

We want to use the HoloLens' four microphones to determine where audio is coming from and indicate the direction to the user. This would primarily help deaf users face whoever is talking to them, making conversation more intuitive. Additionally, we want to convert sign language to speech using the HoloLens' four cameras, to increase the speed at which deaf people can communicate their ideas to others!
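To give a sense of the direction-finding idea: with two microphones a fixed distance apart, the delay between the two recorded signals constrains the arrival angle of the sound. Below is a minimal sketch of that classic time-difference-of-arrival estimate via brute-force cross-correlation. The microphone spacing, sample rate, and how the audio buffers would be pulled off the HoloLens are all assumptions, since we have not built this yet.

```csharp
using System;

public static class SoundLocalizer
{
    const double SpeedOfSound = 343.0; // m/s in room-temperature air

    // Estimate the bearing of a sound source from two microphone signals.
    // micSpacing is the distance between the mics in metres; sampleRate is in Hz.
    // Returns degrees, where 0 means the source is directly ahead.
    public static double EstimateBearing(
        float[] left, float[] right, double micSpacing, double sampleRate)
    {
        // A real sound can only be delayed by at most micSpacing / c between mics.
        int maxLag = (int)Math.Ceiling(micSpacing / SpeedOfSound * sampleRate);
        int bestLag = 0;
        double bestScore = double.NegativeInfinity;

        // Brute-force cross-correlation over the physically possible lags.
        for (int lag = -maxLag; lag <= maxLag; lag++)
        {
            double score = 0;
            for (int i = 0; i < left.Length; i++)
            {
                int j = i + lag;
                if (j >= 0 && j < right.Length)
                    score += left[i] * right[j];
            }
            if (score > bestScore) { bestScore = score; bestLag = lag; }
        }

        // Convert the best-matching delay into an arrival angle.
        double delay = bestLag / sampleRate;
        double sine = Math.Max(-1.0, Math.Min(1.0, delay * SpeedOfSound / micSpacing));
        return Math.Asin(sine) * 180.0 / Math.PI;
    }
}
```

With four microphones, as on the HoloLens, the same estimate across multiple mic pairs would resolve the ambiguity a single pair leaves and give a full direction indication to render in the user's view.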

Built With

Microsoft HoloLens, Unity, Visual Studio, C#, Microsoft Bot Framework, LUIS, Microsoft Cognitive Services (Speech-to-Text), Google Cloud API, Azure Web Apps