Inspiration

HEAR.I.B stands for Hearing for Everyone: Augmented Real-time Interactive Beamforming.

We have close friends who are hard of hearing, so we are familiar with the struggles they face, especially in crowded environments. By combining technologies such as gaze tracking, audio localization, and voice separation, we saw the potential to help solve this problem. We envision this technology being used by hearing-impaired individuals, or by anyone who frequently needs to hear a specific speaker in a loud environment.

How we built it

We divided the product into three main branches:

- Gaze tracking: uses Python and OpenCV to track eye movement and resolve a gaze direction vector.
- Beamforming: combines geometric analysis of a 4-microphone array with the gaze direction vector, overlaying the microphone inputs with calculated time delays for spatial filtering.
- Audio separation: filters the combined audio for vocal signatures, leaving a crisp human voice.
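The beamforming branch described above is essentially delay-and-sum beamforming: each microphone's signal is shifted by the time the sound takes to cross the array along the look direction, then the aligned channels are averaged. The sketch below is illustrative rather than our actual code; the square array geometry, 5 cm spacing, sample rate, and function names are all assumptions.

```python
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s, at room temperature
SAMPLE_RATE = 16_000    # Hz, assumed capture rate

# Hypothetical square 4-mic array, 5 cm spacing, positions in metres (x, y).
MIC_POSITIONS = np.array([
    [0.00, 0.00],
    [0.05, 0.00],
    [0.00, 0.05],
    [0.05, 0.05],
])

def delay_and_sum(signals: np.ndarray, direction: np.ndarray) -> np.ndarray:
    """Align the four channels toward `direction` (the gaze direction
    vector) and average them, reinforcing sound arriving from that bearing.

    signals: shape (4, n_samples), one row per microphone.
    """
    direction = direction / np.linalg.norm(direction)
    # Mics farther along the look direction are closer to the source and
    # hear it first; the others hear it later by the projected distance / c.
    delays = -(MIC_POSITIONS @ direction) / SPEED_OF_SOUND
    delays -= delays.min()  # reference everything to the earliest mic
    shifts = np.round(delays * SAMPLE_RATE).astype(int)

    n = signals.shape[1]
    out = np.zeros(n)
    for sig, shift in zip(signals, shifts):
        # Advance each channel by its arrival delay so the wavefronts line
        # up, zero-padding the tail that falls off the end.
        out[: n - shift] += sig[shift:]
    return out / len(signals)
```

Because uncorrelated noise averages toward zero while the aligned target signal adds coherently, summing the shifted channels boosts the looked-at speaker relative to off-axis sound, which is the spatial filtering described above.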

Challenges we ran into

There are existing designs that resemble our idea of isolating, separating, and enhancing sound. However, ours diverges from the rest in aiming to run in real time, and we struggled to find existing libraries with the complexity we were looking for.

Along with the software component, we tried to build our own wearable hardware for accessibility using a Raspberry Pi and an Arduino. Unfortunately, we could not find the necessary components, so we had to adapt, improvise, and overcome by working with a webcam and the built-in microphones on our laptops. Regardless, our proof of concept shows that our design is viable.

Accomplishments that we're proud of

We were very close to not having a finished product, but we stuck together as a team and showed versatility by pivoting the idea to fit the time constraints and challenges we faced. We are proud of the idea because it has great potential even beyond the scope of the hackathon, and we hope to include everyone in the conversation.

What we learned

Over the last day, we learned how to integrate hardware with our software interfaces, as well as how to use existing APIs to speed up our development process. By playing to each member's strengths, we split up the different aspects of the project, ranging from data processing to hardware integration.

What's next for Smart Hearing Aid HEAR.I.B.

In the future, we hope to implement the following features to improve and further utilize our system:

- A compact form factor for a viable wearable solution
- Automatic volume adjustment based on the environment for hearing protection (e.g. for veterans or at concerts)
- Voice 'fingerprints' to recognize specific voices, which could be used for disaster relief when the system is mounted on an unmanned ground vehicle that scans an area for specific people
- Transcription via speech-to-text for further assistance
