Inspiration

One of our team members has ADHD and struggles to focus in open, noisy environments. As a team, we decided to build an application that helps users with focusing issues read and listen to verbal presentations while filtering out noise and audible distractions. This technology helps not just those with ADHD but also people with autism and other learning disabilities.

What it does

AFAR (Assisted Focus Audio Reviewer) is a web application that transcribes, in real time, audio input from a microphone attached to the speaker, so that the user can follow along with the speaker in text while wearing noise-canceling headphones to filter out audible distractions. This lets the user sit either in a separate, distraction-free room or in the same room as the speaker while staying focused on the lecture.

How we built it

We use rev.ai's speech-to-text service to transcribe the speaker's microphone input. We call the Streaming Speech-To-Text API over WebSockets and display the transcribed text on our front end. The transcript doubles as class notes through the annotation features of a text editor: we use Quill, a free, open-source WYSIWYG editor built for web apps. To facilitate better learning, students can also have the lecture content, along with their notes, read out to them in a consistent, easy-to-understand voice, which we implemented with the browser's SpeechSynthesis API. Listening, transcribing, and note-taking at their own pace provides a simpler, more enriching learning experience for everyone, regardless of their abilities.
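
As a rough sketch of the read-aloud piece, the browser's SpeechSynthesis API can speak the current editor contents back to the student. The `readAloud` helper and its rate setting below are illustrative choices, not our exact code:

```typescript
// Speak text aloud with the browser's built-in SpeechSynthesis API.
// readAloud and its rate value are illustrative, not our exact code.
function readAloud(text: string): void {
  // Cancel anything already being spoken so utterances don't queue up.
  window.speechSynthesis.cancel();

  const utterance = new SpeechSynthesisUtterance(text);
  utterance.rate = 0.9; // slightly slower than default for easier listening

  // Prefer an English voice if one is loaded; otherwise use the default.
  const voice = window.speechSynthesis
    .getVoices()
    .find((v) => v.lang.startsWith("en"));
  if (voice) utterance.voice = voice;

  window.speechSynthesis.speak(utterance);
}

// Example: read back whatever is currently in the Quill editor,
// where `quill` is an initialized Quill instance.
// readAloud(quill.getText());
```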

Challenges we ran into

A feature we wanted to add was a real-time audio read-out through a Raspberry Pi, so that the user could listen to the speaker in a comfortable, familiar text-to-speech voice. Implementing the real-time read-out proved difficult given the time constraint.

Accomplishments that we're proud of

The UI of the website has been designed with distracted students in mind; we used high-contrast visuals to draw the user's attention to the transcription. We also timestamp the transcription so the user can save the speech, access it later, and pinpoint specific information. Overall, we are proud that the application's features keep in mind those with disabilities, so that every person is able to participate, learn, and contribute inside the classroom and out.
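
A minimal sketch of how a finalized hypothesis can be turned into a timestamped transcript line. The `elements`, `value`, and `ts` field names follow rev.ai's documented final-hypothesis shape, but the helper itself is hypothetical:

```typescript
// Convert a finalized rev.ai hypothesis into a timestamped transcript line.
// TranscriptLine and toTranscriptLine are hypothetical helpers.
interface TranscriptLine {
  text: string;
  audioOffsetSec: number; // seconds from the start of the session
  wallClock: string;      // clock time the words were spoken
}

function toTranscriptLine(
  hypothesis: { elements: { value: string; ts?: number }[] },
  sessionStart: Date
): TranscriptLine {
  // Final hypotheses include spacing/punctuation elements, so a plain join
  // reconstructs readable text.
  const text = hypothesis.elements.map((e) => e.value).join("");
  const offsetSec = hypothesis.elements[0]?.ts ?? 0;
  const spokenAt = new Date(sessionStart.getTime() + offsetSec * 1000);
  return {
    text,
    audioOffsetSec: offsetSec,
    wallClock: spokenAt.toLocaleTimeString(),
  };
}
```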

What we learned

One of the biggest things we learned was how to work with rev.ai's real-time streaming API. It was a primary goal to have this application work in real time so that users are always included in speeches, lectures, and presentations. Without real-time transcription, the user would be unable to follow and participate in these events as they happen.
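
A condensed sketch of that real-time flow, from microphone to on-screen transcript. The endpoint and content-type string follow rev.ai's streaming documentation, while the token, audio plumbing, and rendering are simplified assumptions:

```typescript
// Stream raw microphone audio to rev.ai over a WebSocket and render
// hypotheses as they arrive. The access token is a placeholder.
const REV_AI_TOKEN = "<your-access-token>";
const CONTENT_TYPE =
  "audio/x-raw;layout=interleaved;rate=16000;format=S16LE;channels=1";

const ws = new WebSocket(
  `wss://api.rev.ai/speechtotext/v1/stream?access_token=${REV_AI_TOKEN}` +
    `&content_type=${encodeURIComponent(CONTENT_TYPE)}`
);

function renderTranscript(text: string, isFinal: boolean): void {
  // In AFAR this updates the front end; a console log stands in here.
  console.log(isFinal ? `[final] ${text}` : `[partial] ${text}`);
}

ws.onmessage = (event) => {
  const msg = JSON.parse(event.data);
  if (msg.type === "partial" || msg.type === "final") {
    // Final hypotheses carry spacing/punctuation as elements; partials are
    // bare words, so they need an explicit separator.
    const text = msg.elements
      .map((e: { value: string }) => e.value)
      .join(msg.type === "partial" ? " " : "");
    renderTranscript(text, msg.type === "final");
  }
};

async function streamMic(): Promise<void> {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  const ctx = new AudioContext({ sampleRate: 16000 });
  const source = ctx.createMediaStreamSource(stream);

  // ScriptProcessorNode is deprecated but keeps the sketch short;
  // an AudioWorklet is the modern replacement.
  const processor = ctx.createScriptProcessor(4096, 1, 1);
  processor.onaudioprocess = (e) => {
    // Convert 32-bit float samples to the 16-bit signed PCM rev.ai expects.
    const float32 = e.inputBuffer.getChannelData(0);
    const int16 = new Int16Array(float32.length);
    for (let i = 0; i < float32.length; i++) {
      int16[i] = Math.max(-1, Math.min(1, float32[i])) * 0x7fff;
    }
    if (ws.readyState === WebSocket.OPEN) ws.send(int16.buffer);
  };
  source.connect(processor);
  processor.connect(ctx.destination);
}

ws.onopen = () => void streamMic();
```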

What's next for AFAR

Our next steps include real-time audio read out from a Raspberry Pi so that the user can be mobile while listening to the speaker. This would be a useful feature for those individuals that need to fidget or pace to focus as well as other users that need to step out from the lecture to use the bathroom, make a call, etc.
