(Gallery images: Notification Alert for Audio Announcement; Received Sound Spectrogram; Spectrogram after Processing; Listening to Speech)
Inspiration
While we sat brainstorming in the middle of the night, we discovered by coincidence that each of us had been moved by the difficulties faced by a close friend or loved one whose hearing was impaired. While it is only natural to feel empathy for them, our individual experiences all pointed in one direction: they want to resonate with the world like anyone else.
What it does
We tackle two problems to help the hearing impaired experience the world like anyone else. We have implemented a system analogous to Braille, enabled by the ubiquitous presence of powerful computing devices. It lets people with hearing impairments receive public announcements and ambient sounds, such as the ringing of a doorbell, as notifications on their smart devices. It also helps them disambiguate the underlying intent of speech, which a plain speech-to-text transcript alone cannot convey.
How we built it
We used Natural Language Processing algorithms to disambiguate speech waveforms and words and predict the most likely intent(s) of the speaker. The algorithms were trained on datasets of Twitter feeds and movie reviews. We combined prediction results from independent processing of the audio and of the converted text, achieving better results than stock libraries alone. This processing runs on the Google Cloud Platform and Firebase. The 'noise-encoding' algorithm uses an audio steganography method: it encodes binary information at 20,000 Hz, which is inaudible to the human ear. A spectrogram analysis decodes the information and surfaces it to the user as a notification. The decoding algorithm runs on the target device and does not require internet connectivity.
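The write-up does not detail the encoding scheme, so here is a minimal sketch of one plausible approach: on-off keying of a 20 kHz carrier, decoded by measuring the carrier-bin magnitude of a per-frame FFT (in effect, a one-bin spectrogram). All names, frame sizes, and thresholds below are our own illustrative assumptions, not the project's actual implementation:

```python
import numpy as np

SAMPLE_RATE = 48_000   # Hz; must exceed 2 x 20 kHz (Nyquist) to carry the tone
CARRIER_HZ = 20_000    # near-ultrasonic carrier, inaudible to most listeners
FRAME = 480            # samples per bit (10 ms); 20 kHz falls exactly on FFT bin 200
AMPLITUDE = 0.1        # carrier amplitude, kept low to stay unobtrusive

def encode_bits(bits):
    """On-off key each bit as a 10 ms burst of the 20 kHz carrier (1) or silence (0)."""
    t = np.arange(FRAME) / SAMPLE_RATE
    tone = AMPLITUDE * np.sin(2 * np.pi * CARRIER_HZ * t)
    silence = np.zeros(FRAME)
    return np.concatenate([tone if b else silence for b in bits])

def decode_bits(signal, n_bits):
    """Recover bits from the 20 kHz energy in each frame of the received signal."""
    bin_idx = round(CARRIER_HZ * FRAME / SAMPLE_RATE)     # FFT bin of the carrier
    threshold = 0.5 * AMPLITUDE * FRAME / 2               # half the expected peak
    bits = []
    for i in range(n_bits):
        frame = signal[i * FRAME:(i + 1) * FRAME]
        magnitude = np.abs(np.fft.rfft(frame))[bin_idx]
        bits.append(1 if magnitude > threshold else 0)
    return bits
```

A real deployment over speakers and microphones would also need a synchronisation preamble, error-correcting codes, and thresholds robust to ambient noise; this sketch only illustrates the core encode/decode idea on clean sample arrays.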
Challenges we ran into
The idea and algorithm for the 'audio noise encoding' system were conceived and built entirely within the span of the hackathon, including the full spectrogram-parsing algorithm. We also adapted an open-source project for the NLP component, and setting the whole project up on GCP was an exciting experience in itself.
Accomplishments that we're proud of
We are proud that the technology we developed has the potential to be adopted worldwide and transform lives in a meaningful way. The technical challenges we overcame while building a real-time system also bolstered our confidence in our skills.
What we learned
While developing the system, we put ourselves in the shoes of the people who will use our technology. This gave us a deeper understanding of how people with hearing impairments feel when interacting with others, both in public and at home.
What's next for Resonate
We believe the technology can be formalised into a standard for all audio systems. We plan to make our prediction analysis more robust and accurate by training on our own datasets, and to tailor the underlying models to our use case.