heAR was born out of our team's desire to create an AR application that would positively impact the lives of hearing-impaired individuals.

What it does:

heAR facilitates conversation between a hearing-impaired individual and hearing individuals using speech-to-text and American Sign Language-to-text translation. The hearing-impaired user wears the Magic Leap 1 headset, points the controller at the person who is speaking, and holds down the trigger button for the device to begin translating and displaying text. A speech bubble containing the transcribed text appears next to the speaker, so the hearing-impaired user can participate in a verbal conversation with others. Messages are also recorded under a message history option.

How we built it:

We used the Magic Leap SDK for Unity and the Lumin SDK to build the AR experience, including spatial scanning, overlaying speech-bubble objects, and ray-casting to identify the person being targeted.

We used the IBM Watson API and Watson's Unity SDK for speech-to-text recognition, and a custom CNN for gesture / sign language recognition.
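As a rough illustration of the speech-to-text side, here is a minimal Python sketch of assembling a Watson Speech to Text request, assuming the service's REST interface (a `POST` to `{service_url}/v1/recognize` with the audio as the request body, authenticated with basic auth using `apikey` as the username). The service URL and API key below are placeholders, not values from our project.

```python
import base64

def build_stt_request(service_url: str, api_key: str,
                      content_type: str = "audio/wav") -> dict:
    """Assemble the pieces of a Watson Speech to Text recognize call.

    Returns a dict describing the HTTP request; an actual client
    (requests, UnityWebRequest, etc.) would execute it.
    """
    # Watson's IAM API-key basic auth uses the literal username "apikey".
    token = base64.b64encode(f"apikey:{api_key}".encode()).decode()
    return {
        "method": "POST",
        "url": f"{service_url.rstrip('/')}/v1/recognize",
        "headers": {
            "Content-Type": content_type,
            "Authorization": f"Basic {token}",
        },
    }
```

In the actual app, Watson's Unity SDK handles this plumbing; the sketch just shows what the SDK is doing under the hood.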

We used Keras and a Kaggle dataset to train a VGG network, and hosted it on Google Cloud Platform using Flask, which provides a RESTful API endpoint for communicating with the Magic Leap.
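A minimal sketch of what such a Flask endpoint could look like. The route name `/predict`, the JSON payload shape, and the `ASL_LETTERS` label set are illustrative assumptions (a static-alphabet dataset excludes J and Z because signing them requires motion); the stub classifier stands in for the trained Keras VGG model.

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

# Hypothetical 24-class static ASL alphabet (J and Z omitted: they
# require motion, which a single-frame CNN cannot capture).
ASL_LETTERS = list("ABCDEFGHIKLMNOPQRSTUVWXY")

def classify(pixels):
    # Stand-in for model.predict(...): in the real service this would
    # run the trained Keras VGG network on the preprocessed image.
    return int(sum(pixels)) % len(ASL_LETTERS)

@app.route("/predict", methods=["POST"])
def predict():
    # The headset client would POST the captured frame as JSON.
    payload = request.get_json(force=True)
    idx = classify(payload.get("pixels", []))
    return jsonify({"class_index": idx, "letter": ASL_LETTERS[idx]})
```

The headset only needs an HTTP client, so the heavy model stays server-side and the response is a small JSON object the AR layer can render into a speech bubble.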

We used After Effects, Illustrator, Cinema 4D, and Blender to build custom assets and animations.

Challenges we ran into:

  • For most of us, it was our first time working on a VR/AR project.
  • A constant cycle of learning and then implementing throughout the 2.5 days.
  • Networking in Unity is very painful and challenging.
  • Magic Leap SDK setup and installation.
  • Magic Leap device integration.
  • Integrating a custom AI solution into an AR project.
  • Adapting to technology we had never used before.

Accomplishments that we're proud of:

  • We were able to produce a fully functional prototype in two and a half days and integrate it with the Magic Leap system.
  • We had a great team dynamic and worked well to balance out our strengths and weaknesses.
  • Most importantly, we had the opportunity to learn from each other and the mentors to take away some valuable skills from the Reality Virtually Hackathon.

What we learned:

  • For most of us, this was our first VR/AR project, so we worked together to build a working MVP.
  • A lot of technical skills, including Unity, Magic Leap, IBM Watson, GCP, and neural networks.

What's next for heAR:

  • We aim to implement facial recognition into the system to detect when a human face appears in the frame.
  • Automatically localize audio sources so speech bubbles can be placed without pointing the controller.
  • Improve the UI and aesthetics for the most pleasant, non-intrusive experience possible.
  • Continue working to build our MVP into a product.