We wanted to create a device that eases the lives of people with disabilities, and with AR becoming mainstream it seemed like the right time to build it.

What it does

Our AR headset converts speech to text and displays it in real time on the headset's display, letting the user read what the other person is saying. This makes conversation easier, since the user no longer has to read lips to communicate.

How we built it

We used the IBM Watson Speech to Text API to convert speech to text.
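As a rough illustration of the transcription step, here is a minimal sketch of turning a Watson-style Speech to Text response into a single caption string for display. The sample JSON and the `extract_transcript` helper are illustrative assumptions, not our actual code; the real API response carries additional fields (timestamps, word confidences, etc.).

```python
import json

# Hypothetical sample shaped like a Watson Speech to Text response:
# results -> alternatives -> transcript. Values here are made up.
sample_response = json.loads("""
{
  "results": [
    {"alternatives": [{"transcript": "hello how are you ", "confidence": 0.91}]},
    {"alternatives": [{"transcript": "nice to meet you ", "confidence": 0.87}]}
  ]
}
""")

def extract_transcript(response: dict) -> str:
    """Join the top alternative of each result into one caption string."""
    parts = [r["alternatives"][0]["transcript"].strip()
             for r in response.get("results", [])]
    return " ".join(parts)

print(extract_transcript(sample_response))  # -> hello how are you nice to meet you
```

In the headset this string would be pushed to the Unity UI each time a new result arrives, so captions update as the other person speaks.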

Challenges we ran into

We first attempted to set up our system using Microsoft's Cortana and its available API, but after struggling to get the libraries to work we had to switch to an alternative approach.

Accomplishments that we're proud of

Using IBM Watson and Unity to build a working prototype, with the Kinect as the web camera and the Oculus Rift as the headset, effectively turning them into an AR headset.

What we learned

What's next for Hear Again

We want to improve the UI, speed up the speech-to-text recognition, and port the project to the Microsoft HoloLens for the least intrusive experience.
