Inspiration
Can you imagine sitting in a classroom and struggling to follow along because you can't comprehend the slides? Imagine being in a group conversation and struggling to work out who is saying what. This is what people with learning disabilities face daily, especially those with Auditory Processing Disorder or Language Processing Disorder. We personally know a relative who has experienced these disorders, and we wanted to build something in the accessibility space because there are no effective tools to solve this problem.
What it does
AudioAssist uses recent breakthroughs in machine learning to help people with hearing disabilities. There are three core features. Users simply open the app and hold it in front of them in a group setting, then use the following features:
- Speaker Differentiation: The app distinguishes the speakers around the user and labels each utterance with a speaker label (Speaker 1, Speaker 2, and so on), so the user can easily follow who is saying what.
- Noise Filtering: AudioAssist captures the surrounding audio and applies a spectral-gating noise filter to suppress ambient and extraneous noise, returning a clean audio stream so users can hear much more clearly.
- Real-time Chat UI: As people talk in a group, the app adds speech bubbles containing what was just said. The user can simply scroll through and read the live conversation as it happens.
Lastly, the user can save a conversation so they can revisit it later. These features are especially helpful to those with hearing disabilities.
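The speaker-labeling step described above can be sketched as a small mapping from raw diarization output to friendly chat-bubble labels, assigned in order of first appearance. This is a minimal illustration under our own assumptions, not the app's actual code; the function name and the `(speaker_id, text)` tuple shape are hypothetical:

```python
def label_turns(diarized):
    """Map raw diarization speaker ids (e.g. cluster keys from a
    diarization model) to 'Speaker N' labels for the chat UI.

    diarized: list of (speaker_id, text) tuples in conversation order.
    Returns a list of (label, text) tuples ready to render as bubbles.
    """
    labels = {}   # raw speaker id -> friendly label
    bubbles = []
    for speaker_id, text in diarized:
        if speaker_id not in labels:
            # First time we hear this speaker: assign the next label.
            labels[speaker_id] = f"Speaker {len(labels) + 1}"
        bubbles.append((labels[speaker_id], text))
    return bubbles
```

For example, turns from raw speakers `"a"`, `"b"`, `"a"` would render as Speaker 1, Speaker 2, Speaker 1, keeping labels stable across the whole conversation.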
How we built it
The app was built with React Native to make it cross-platform between iOS and Android. For noise filtering we used an algorithm called spectral gating, and for speaker differentiation we used the UIS-RNN algorithm. The app is hosted on Azure and uses Azure Blob Storage to save conversations.
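Spectral gating itself is simple to sketch: estimate a per-frequency noise floor from frames assumed to contain only noise, then zero out short-time spectrum bins that don't rise sufficiently above it. A minimal NumPy illustration under our own assumptions (parameter values and function name are ours, not the app's code):

```python
import numpy as np

def spectral_gate(signal, frame_len=256, hop=128, noise_frames=5, threshold_db=6.0):
    """Suppress spectral bins that stay below a per-bin noise floor.

    The first `noise_frames` frames are assumed to be noise-only and are
    used to estimate the noise floor for each frequency bin.
    """
    window = np.hanning(frame_len)
    n_frames = 1 + (len(signal) - frame_len) // hop

    # Windowed frames -> short-time spectrum (real FFT per frame).
    frames = np.stack([signal[i * hop:i * hop + frame_len] * window
                       for i in range(n_frames)])
    spec = np.fft.rfft(frames, axis=1)
    mag = np.abs(spec)

    # Per-bin noise floor from the assumed noise-only frames.
    noise_floor = mag[:noise_frames].mean(axis=0)

    # Binary gate: keep bins exceeding the floor by `threshold_db` dB.
    gain = (mag > noise_floor * 10 ** (threshold_db / 20)).astype(float)

    # Inverse FFT and overlap-add back to a time-domain signal.
    cleaned_frames = np.fft.irfft(spec * gain, n=frame_len, axis=1)
    out = np.zeros(len(signal))
    for i in range(n_frames):
        out[i * hop:i * hop + frame_len] += cleaned_frames[i]
    return out
```

Production implementations typically use soft gains and smoothing across time and frequency instead of a hard binary gate, which avoids the "musical noise" artifacts this simple version can produce.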
Challenges we ran into
This was an ambitious project for a hackathon: there were many features, and all of them were difficult to build. React Native surfaced many bugs, and each build took a long time because we all had limited experience with it. Even so, we successfully built what we envisioned, albeit at an initial MVP stage.
Accomplishments that we're proud of
We realized that difficulty is where the opportunity is. Working on hard problems creates more value than working on easy ones.
What we learned
We learned that React Native isn't always the best choice. For an iOS-first MVP, building natively in Swift would have been simpler.
What's next for AudioAssist
We had envisioned a summarization feature, but due to time constraints it wasn't built. It would use ML models to summarize hard-to-comprehend phrases and sentences so the user can understand them more easily. Building this would be our next step, and it seems very useful.