iHear VR Headset
Web App for modelling/testing purposes
We wanted to make a hack for the greater good.
What it does
Changes the lives of deaf people everywhere with an AR platform that offers sound localization along with an intuitive, easy-to-use Sign-Language-to-Speech service.
How we built it
AR Platform - Unity running on Google Cardboard on Android.
Sign-Language to Speech:
- Leap Motion to identify the moving hand/fingers.
- A neural network we created and trained to identify certain sign-language gestures, running on Azure.
- Arduino 101 with three sound detectors that capture sound levels from different directions.
- Spark Core (now known as Particle Photon) used to process the sound information and pass its conclusions to our Linode back-end, which has a socket.io connection to the AR platform.
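The sound-localization step above can be sketched roughly as follows. This is a minimal illustration, not our firmware: it assumes the three detectors face outward at equal 120° spacing and estimates the bearing of a sound as the amplitude-weighted vector sum of the detector directions.

```python
import math

# Assumed layout: three sound detectors facing 0, 120 and 240 degrees
# around the wearer (this arrangement is an illustrative assumption).
DETECTOR_ANGLES = [0.0, 2 * math.pi / 3, 4 * math.pi / 3]

def estimate_bearing(amplitudes):
    """Estimate the sound's bearing in degrees (0-360) from the
    amplitude read by each detector, via a weighted vector sum."""
    x = sum(a * math.cos(t) for a, t in zip(amplitudes, DETECTOR_ANGLES))
    y = sum(a * math.sin(t) for a, t in zip(amplitudes, DETECTOR_ANGLES))
    return math.degrees(math.atan2(y, x)) % 360

# A loud reading on the front detector dominates the estimate.
print(round(estimate_bearing([0.9, 0.1, 0.1])))  # → 0
```

In the real pipeline this kind of estimate would be computed on the Spark Core and forwarded to the back-end over the socket.io link, where the AR platform renders it as a visual cue.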
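The gesture-recognition step can likewise be sketched: each Leap Motion frame is reduced to a feature vector (here, hypothetical per-finger extension values between 0 and 1), and a model maps that vector to a gesture label. A nearest-centroid classifier stands in below for the actual neural network we trained on Azure; the gesture names and feature encoding are illustrative assumptions.

```python
import math

# Hypothetical gesture templates: one centroid of per-finger extension
# values (thumb..pinky) per gesture label.
CENTROIDS = {
    "hello": [1.0, 1.0, 1.0, 1.0, 1.0],   # open hand
    "yes":   [0.0, 0.0, 0.0, 0.0, 0.0],   # closed fist
    "point": [0.0, 1.0, 0.0, 0.0, 0.0],   # index finger extended
}

def classify(features):
    """Return the gesture label whose centroid is nearest (Euclidean
    distance) to the frame's feature vector."""
    def dist(centroid):
        return math.sqrt(sum((f - c) ** 2 for f, c in zip(features, centroid)))
    return min(CENTROIDS, key=lambda label: dist(CENTROIDS[label]))

print(classify([0.1, 0.9, 0.2, 0.1, 0.0]))  # → point
```

The classified label would then be fed to a text-to-speech service to produce the spoken output.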
Challenges we ran into
- Literally everything.
Accomplishments that we're proud of
- Having completed a proof-of-concept version of an idea that was very ambitious.
What we learned
- Using Leap Motion, training neural networks for sign-language gestures, and using Arduino and Spark Core for sound detection and analysis.
What's next for iHear
- Speech-to-text implementation (we ran out of time).
- Better sign-language recognition.