Inspiration

Many of us know someone who is hard of hearing, if not completely deaf. Though not as debilitating as other ailments, being hard of hearing hinders a person's ability to do basic tasks that many of us take for granted. Listen Hear offers people who struggle to hear a chance to share in a sense that can be useful, if not life-saving.

What it does

Listen Hear is an app meant to offer basic audio cues, in visual form, to people who are hard of hearing. Many salient noises (an ambulance siren, a car honking, a baby crying) provide additional information about an important situation that requires attention. Those who are hard of hearing do not have these cues and must rely more heavily on their other senses. Listen Hear attempts to bridge the gap by giving these users some insight into the cues they would otherwise miss.

Tools already exist to process human language, but non-spoken audio remains an untapped market for many applications. Listen Hear uses peak amplitude sinusoidal mapping, in addition to more traditional machine learning techniques, to detect important tones and noises and label them correctly. With this growing source of information, we can provide assistance to people who are hard of hearing, as well as tap into a new source of research and data mining.
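To illustrate the general idea of peak amplitude mapping, here is a minimal sketch of extracting local peaks from an audio spectrogram. The function name, neighborhood size, and loudness threshold are our own illustrative choices, not the exact parameters the app uses.

```python
import numpy as np
from scipy import signal
from scipy.ndimage import maximum_filter

def spectrogram_peaks(samples, rate, neighborhood=20, min_db=10.0):
    """Return (frequency, time) pairs of local amplitude peaks.

    Illustrative sketch: compute a log-power spectrogram, then keep
    points that are both a local maximum within their neighborhood
    and above a loudness floor. These peaks form the "fingerprint"
    that a query clip can be matched against.
    """
    freqs, times, sxx = signal.spectrogram(samples, fs=rate)
    log_sxx = 10 * np.log10(sxx + 1e-10)  # avoid log(0)
    is_local_max = maximum_filter(log_sxx, size=neighborhood) == log_sxx
    peaks = is_local_max & (log_sxx > min_db)
    return [(freqs[f], times[t]) for f, t in zip(*np.nonzero(peaks))]
```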

How we built it

The main processing algorithm used to detect sounds is based on an algorithm similar to Shazam's: it maps peaks in an audio file and compares an input sound clip against a corpus of pre-defined audio files.
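In practice we relied on the open-source dejavu library (mentioned below) for this fingerprinting and matching. A minimal sketch of driving it is shown here; the database credentials and file paths are placeholders, and import paths vary slightly between dejavu releases.

```python
from dejavu import Dejavu
from dejavu.recognize import FileRecognizer  # path differs in newer releases

# Placeholder MySQL credentials; dejavu stores fingerprints in a database.
config = {
    "database": {
        "host": "127.0.0.1",
        "user": "root",
        "passwd": "password",
        "db": "dejavu",
    }
}

djv = Dejavu(config)

# Fingerprint a directory of reference sounds (sirens, horns, crying, ...).
djv.fingerprint_directory("sounds", [".wav"])

# Match an incoming clip against the fingerprinted corpus.
match = djv.recognize(FileRecognizer, "query.wav")
print(match)  # includes the best-matching sound name and a confidence score
```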

We used Android Studio to create one of our front ends for the application.

We also built a web-app front end that uses the browser Notifications API (as documented by Mozilla).

Challenges we ran into

  • Android Studio is never easy, but we handled it well despite that
  • Alexa's developer tools did not offer any feasible way to provide transcription functionality, so we had to scrap that idea
  • Connecting AWS's mobile push service proved tricky, since we had generated more keys than we realized. Once we found the root of the problem, things went relatively smoothly
  • Using Firebase Cloud Messaging (FCM) to push notifications to a web app was much more challenging than doing it through Android Studio, which has built-in Firebase integration. We found a workaround (a sketch of the server-side push follows this list), but it would have been nice to get notifications on the desktop
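For reference, sending a push through FCM from a Python backend can look like the following sketch using the firebase-admin SDK. The credential path, device token, and message contents are placeholders; this is illustrative rather than our exact code.

```python
import firebase_admin
from firebase_admin import credentials, messaging

# Placeholder service-account key, downloaded from the Firebase console.
cred = credentials.Certificate("serviceAccountKey.json")
firebase_admin.initialize_app(cred)

# Placeholder registration token for the target Android or web client.
device_token = "DEVICE_REGISTRATION_TOKEN"

message = messaging.Message(
    notification=messaging.Notification(
        title="Listen Hear",
        body="Detected: ambulance siren nearby",
    ),
    token=device_token,
)

response = messaging.send(message)  # returns the message ID on success
print("Sent:", response)
```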

Accomplishments that we're proud of

We were thrilled when we got notifications working both on Android and on desktop. We are also proud of successfully adapting the Python library we found to an application different from the one it was originally intended for.

What we learned

We learned how to connect Amazon Web Services to Android applications and web applications through Firebase. We also learned how to use an open-source Python library named dejavu, which handles sound matching. Finally, we got a chance to dabble in Alexa programming, even though we were ultimately unable to use it for our purposes.

What's next for Listen Hear

Listen Hear has many benefits for people who are hard of hearing and, potentially, for anyone hoping to monitor a home or car remotely. There are also many opportunities to expand the technology with machine learning, so that it can infer what a noise is likely to be, along with a confidence estimate. All of these opportunities open a new area of research and development for professionals and hobbyists alike.

We would specifically like to see Listen Hear expand its capabilities to include language processing, so the app could be used more frequently and globally, thereby improving its performance: faster processing, better result accuracy, and better data management, to name a few.
