Inspiration

January 22-24, 2016 @ PennAppsXIII

We braved Storm Jonas to come to Philadelphia and collaborate on a project that is original, challenging, and helps make the world a better place. After many hours of meetings and throwing out ideas, we found that there were few real-time visual alert systems for the hearing-impaired. Their main mode of communication is visual, so we decided to create a system that integrates sound communication into a visual medium. We were inspired by the fact that no viable commercial product performs a similar function, and we wanted to build a prototype of a visual sound alert system during PennAppsXIII.

What it does

Headgear with strategically placed microphones picks up sound cues above ambient noise and alerts the user in real time through a visual aid (currently Google Cardboard). Using the phone's camera, the environment in front of the user is shown on the screen, and alerts are overlaid on top of it without impairing the user's vision. Multiple microphones pick up the relative position of the sound source, so the user is alerted with not only the content of the sound but also its direction.
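As a rough illustration of the alert trigger, the sketch below (in Python, assuming PyAudio and a single microphone) compares each short audio window against a rolling ambient-noise baseline; the trigger ratio and smoothing factor are illustrative placeholders, not our tuned values.

    import audioop
    import pyaudio

    RATE = 16000          # sample rate in Hz
    CHUNK = 1024          # frames per read
    SAMPLE_WIDTH = 2      # bytes per 16-bit sample
    TRIGGER_RATIO = 3.0   # placeholder: how far above ambient counts as a cue

    p = pyaudio.PyAudio()
    stream = p.open(format=pyaudio.paInt16, channels=1, rate=RATE,
                    input=True, frames_per_buffer=CHUNK)

    ambient = None  # exponential moving average of background loudness
    while True:
        data = stream.read(CHUNK)
        rms = audioop.rms(data, SAMPLE_WIDTH)  # loudness of this window
        if ambient is not None and rms > TRIGGER_RATIO * ambient:
            print("alert: sound cue above ambient noise")  # hand off to the overlay
        # update the baseline slowly so brief cues don't raise it much
        ambient = rms if ambient is None else 0.95 * ambient + 0.05 * rms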

How we built it

We built headgear with 3 microphones placed for maximal detection of the relative position of the sound source. These are all connected to a Raspberry Pi, which processes the input audio and sends a speech-to-text request via the Houndify API. The text is then displayed in the mobile phone app, overlaid on the camera view of the world in front of the user, along with information about the position of the sound.
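A minimal sketch of the Pi-side pipeline, assuming PyAudio for capture and the SpeechRecognition package's Houndify wrapper for the speech-to-text request; the device indices, credentials, and the loudest-mic heuristic for direction are all assumptions for illustration, not our exact implementation.

    import audioop
    import pyaudio
    import speech_recognition as sr

    RATE, CHUNK, WIDTH = 16000, 1024, 2
    MICS = {"left": 2, "right": 3, "rear": 4}  # name -> device index (assumed)
    CLIENT_ID, CLIENT_KEY = "...", "..."       # Houndify credentials (placeholders)

    p = pyaudio.PyAudio()
    streams = {name: p.open(format=pyaudio.paInt16, channels=1, rate=RATE,
                            input=True, frames_per_buffer=CHUNK,
                            input_device_index=idx)
               for name, idx in MICS.items()}

    def read_window(seconds=1.0):
        """Read ~`seconds` of audio from every mic; return name -> raw bytes."""
        n = int(RATE * seconds / CHUNK)
        # sequential reads are fine for a sketch; real code would read concurrently
        return {name: b"".join(s.read(CHUNK) for _ in range(n))
                for name, s in streams.items()}

    def direction_and_text():
        window = read_window()
        levels = {name: audioop.rms(data, WIDTH) for name, data in window.items()}
        direction = max(levels, key=levels.get)  # loudest mic approximates bearing
        audio = sr.AudioData(window[direction], RATE, WIDTH)
        try:
            text = sr.Recognizer().recognize_houndify(
                audio, client_id=CLIENT_ID, client_key=CLIENT_KEY)
        except sr.UnknownValueError:
            text = ""  # nothing intelligible in this window
        return direction, text  # sent to the phone app for the overlay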

Challenges we ran into

Everything - from the weather to sickness, fighting sleep deprivation, hard-to-find documentation, speech recognition degraded by microphone noise, constant hardware failures (Raspberry Pi, router, multiple audio input sources), and (of course) bugs.

Accomplishments that we're proud of

We created a prototype of the first device/app of its kind: one that integrates sound into an everyday visual device for the hearing-impaired.

Countless resuscitations of the Raspberry Pi (with no access to a monitor)

Figuring out that Unity randomly crashes on audio services (after repeated crashes despite rollbacks, fresh compiles, etc.)

Being recruited as a model for the first time in my life (YoungWook)

What we learned

Comparable/similar apps include the following:

Tap Tap (https://itunes.apple.com/us/app/tap-tap/id369747181?mt=8)

Hearing Aide (https://play.google.com/store/apps/details?id=com.grey.hearingaide&hl=en)

But none of these apps have achieved the accuracy and usability needed for widespread adoption by the hearing-impaired community, and no app has attempted to bring sound cues into visual reality.

We learned about the dangers and challenges that the hearing-impaired population faces on a daily basis, many of which most people take for granted.

What's next for HeAR - Hearing (Augmented Reality) Aid

Machine learning for sound and alert recognition - learning to recognize important warning sounds such as sirens and honks. Past apps that hard-coded warning sounds into the app have had limited success, because hard-coding limits their usefulness in unfamiliar environments, exactly where such cues are needed most. We also want a less intrusive hardware design (for both the headgear and the visual aid), as well as improved position detection and speech-to-text conversion through better sound processing.
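As a rough sketch of what that classifier could look like (a future direction, not something we built at the hackathon), the snippet below trains a small model on labeled audio clips using MFCC features; the directory layout, labels, and hyperparameters are placeholders.

    import glob
    import numpy as np
    import librosa
    from sklearn.ensemble import RandomForestClassifier

    def features(path):
        """One fixed-length feature vector per clip: mean MFCCs over time."""
        y, sr_ = librosa.load(path, sr=16000, mono=True)
        mfcc = librosa.feature.mfcc(y=y, sr=sr_, n_mfcc=13)
        return mfcc.mean(axis=1)

    X, labels = [], []
    for label in ("siren", "honk", "other"):        # assumed directory layout
        for path in glob.glob(f"clips/{label}/*.wav"):
            X.append(features(path))
            labels.append(label)

    clf = RandomForestClassifier(n_estimators=100).fit(np.array(X), labels)

    def classify(path):
        """Predict the warning-sound category of a new clip."""
        return clf.predict([features(path)])[0]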
