Inspiration

We were inspired to create an app for people who are deaf or hard of hearing. Over 48 million Americans (20 percent of the population) have some level of hearing loss, and this number is growing. We wanted to build a solution that lets people follow a conversation without having to look away from whoever is talking.

What it does

The interface is simple: a live video feed with subtitles displayed at the bottom of the screen. This minimalism lets users keep watching whoever is talking while following along with the text.

How we built it

We built a native iOS application that uses the Google Cloud Speech API on the backend to transcribe speech in real time.
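
For illustration, here is a minimal sketch of how an iOS client can send a chunk of LINEAR16 audio to Google's Speech-to-Text REST endpoint and read back a transcript. The API key, sample rate, and single-shot request shape are placeholder assumptions; our app keeps the feed live rather than recognizing one chunk at a time.

```swift
import Foundation

// Placeholder credentials: a real Google Cloud API key is required.
let apiKey = "YOUR_API_KEY"
let url = URL(string: "https://speech.googleapis.com/v1/speech:recognize?key=\(apiKey)")!

// Send one chunk of raw 16-bit PCM audio and print the top transcript.
func recognize(pcmAudio: Data) {
    let body: [String: Any] = [
        "config": [
            "encoding": "LINEAR16",
            "sampleRateHertz": 16000,
            "languageCode": "en-US"
        ],
        "audio": ["content": pcmAudio.base64EncodedString()]
    ]
    var request = URLRequest(url: url)
    request.httpMethod = "POST"
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    request.httpBody = try? JSONSerialization.data(withJSONObject: body)

    URLSession.shared.dataTask(with: request) { data, _, _ in
        guard let data = data,
              let json = try? JSONSerialization.jsonObject(with: data) as? [String: Any],
              let results = json["results"] as? [[String: Any]],
              let alternatives = results.first?["alternatives"] as? [[String: Any]],
              let transcript = alternatives.first?["transcript"] as? String
        else { return }
        print(transcript)  // shown to the user as the subtitle line
    }.resume()
}
```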

Challenges we ran into

At first, we wanted to build a React Native app. A hybrid application would have let us deploy to both Android and iOS from a single codebase. We also planned to use IBM Watson's Speech to Text API, but we ran into challenges streaming audio into its WebSocket interface and did not want to sacrifice live subtitles. To overcome this, we pivoted to an iOS-only application that uses Google's API instead.
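
For context, Watson's Speech to Text WebSocket interface expects a JSON "start" message describing the audio format, followed by binary audio frames, with interim transcripts arriving as JSON messages. A rough sketch of that handshake in Swift; the instance URL and access token are placeholders, and error handling is mostly omitted:

```swift
import Foundation

// Placeholder values: a real Watson instance URL and IAM access token are required.
let watsonURL = URL(string: "wss://api.us-south.speech-to-text.watson.cloud.ibm.com/instances/INSTANCE_ID/v1/recognize?access_token=TOKEN")!

let task = URLSession.shared.webSocketTask(with: watsonURL)
task.resume()

// 1. Open the recognition session by describing the audio stream.
let start = #"{"action": "start", "content-type": "audio/l16;rate=16000", "interim_results": true}"#
task.send(.string(start)) { error in
    if let error = error { print("start failed: \(error)") }
}

// 2. Stream raw PCM chunks as binary frames (audioChunk would come from a mic tap).
func sendAudio(_ audioChunk: Data) {
    task.send(.data(audioChunk)) { error in
        if let error = error { print("audio send failed: \(error)") }
    }
}

// 3. Listen for interim and final transcripts.
func listen() {
    task.receive { result in
        if case .success(.string(let message)) = result {
            print(message)  // JSON containing "results" -> "alternatives" -> "transcript"
        }
        listen()  // keep receiving
    }
}
listen()
```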

Accomplishments that we're proud of

As a team, we had very little iOS or mobile development experience, so we are proud of getting an app running in such a short amount of time. We are also proud of our minimal user interface, which has no learning curve.

What we learned

We learned the intricacies and nuances of mobile app development. We also learned the challenges that come with converting speech to text, especially in real time.

What's next for SubtitleApp

We plan to use IBM Watson's Tone Analyzer API to detect emotion in what people are saying, so users see more than plain text.
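
As a sketch of how that might work, Tone Analyzer exposes a REST endpoint that scores a piece of text for tones such as joy or anger, which could be used to annotate each subtitle line. The instance URL and API key below are placeholders:

```swift
import Foundation

// Placeholder credentials: a real service instance URL and API key are required.
let toneURL = URL(string: "https://api.us-south.tone-analyzer.watson.cloud.ibm.com/instances/INSTANCE_ID/v3/tone?version=2017-09-21")!
let toneAPIKey = "YOUR_API_KEY"

// Score a finished subtitle line for emotional tone.
func analyzeTone(of subtitle: String) {
    var request = URLRequest(url: toneURL)
    request.httpMethod = "POST"
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    let credentials = Data("apikey:\(toneAPIKey)".utf8).base64EncodedString()
    request.setValue("Basic \(credentials)", forHTTPHeaderField: "Authorization")
    request.httpBody = try? JSONSerialization.data(withJSONObject: ["text": subtitle])

    URLSession.shared.dataTask(with: request) { data, _, _ in
        guard let data = data,
              let json = try? JSONSerialization.jsonObject(with: data) as? [String: Any],
              let document = json["document_tone"] as? [String: Any],
              let tones = document["tones"] as? [[String: Any]]
        else { return }
        // Each tone has a "tone_name" (e.g. "Joy") and a confidence "score".
        for tone in tones {
            print(tone["tone_name"] ?? "", tone["score"] ?? "")
        }
    }.resume()
}
```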

We plan on adding a language feature so that subtitles can be translated into the user's language of choice in real time. We also want to expand to Android and fine-tune the speech recognition.
