The growing conversation around invisible disabilities, and the desire for technology to cater to marginalized groups, inspired us to create Dhwani. Deafness is an invisible disability that is common yet often overlooked, which made it a natural starting point.

The app is primarily for deaf and hard-of-hearing people. Its features include:

  1. Name calling detection with conversation history
  2. Bell, Fire Alarm, and other high-pitched noise detection
  3. Automatic to-do list
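
As a rough illustration of how the second feature could work in a web app, here is a hypothetical sketch of high-pitched noise detection using the browser's Web Audio API. The 2000 Hz threshold, the loudness cutoff, and the polling interval are illustrative assumptions, not values from the actual app:

```javascript
// Pure helper: given an FFT magnitude array (0-255 per bin), the sample
// rate, and the FFT size, report whether any bin above `minHz` is louder
// than `minLevel`. Threshold values here are illustrative guesses.
function hasHighPitchedNoise(bins, sampleRate, fftSize, minHz = 2000, minLevel = 180) {
  const hzPerBin = sampleRate / fftSize;        // frequency width of each bin
  const firstBin = Math.ceil(minHz / hzPerBin); // first bin at or above minHz
  for (let i = firstBin; i < bins.length; i++) {
    if (bins[i] >= minLevel) return true;       // loud high-frequency component
  }
  return false;
}

// Browser wiring (runs only where the Web Audio API exists):
if (typeof window !== 'undefined' && navigator.mediaDevices) {
  navigator.mediaDevices.getUserMedia({ audio: true }).then((stream) => {
    const ctx = new AudioContext();
    const analyser = ctx.createAnalyser();
    analyser.fftSize = 2048;
    ctx.createMediaStreamSource(stream).connect(analyser);
    const bins = new Uint8Array(analyser.frequencyBinCount);
    setInterval(() => {
      analyser.getByteFrequencyData(bins);
      if (hasHighPitchedNoise(bins, ctx.sampleRate, analyser.fftSize)) {
        console.log('High-pitched noise detected'); // e.g. trigger an alert here
      }
    }, 250);
  });
}
```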

Our goal is to help deaf and hard-of-hearing people integrate naturally into everyday society, and to enable more personal interactions than ever before by converting speech directed at them into mobile vibrations, notifications, to-do list entries, and more.
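
A minimal sketch of how a detected sound event could be turned into a vibration and an on-screen notification, using the browser's Vibration and Notification APIs. The event names and vibration patterns are made up for illustration, not taken from Dhwani's code:

```javascript
// Pure helper: map event types to Vibration API patterns (ms on/off pairs).
// These patterns are illustrative assumptions.
function vibrationPattern(eventType) {
  const patterns = {
    name_called: [200, 100, 200],          // two short pulses
    fire_alarm: [500, 100, 500, 100, 500], // urgent triple pulse
    doorbell: [300],                       // single pulse
  };
  return patterns[eventType] || [100];     // default: brief tap
}

// Browser wiring: vibrate and notify only where those APIs exist.
function alertUser(eventType, message) {
  if (typeof navigator !== 'undefined' && navigator.vibrate) {
    navigator.vibrate(vibrationPattern(eventType));
  }
  if (typeof Notification !== 'undefined' && Notification.permission === 'granted') {
    new Notification('Dhwani', { body: message });
  }
}
```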

Our future vision is integration with hearing aids that connect to smartphones and smartwatches, making hearing aids truly 'smart'.

We used the browser's webkitSpeechRecognition API (part of the Web Speech API), which lets our program learn the user's name by having the user say it five times. The app is built with Node.js, JavaScript, and HTML, and the screen mockups were designed in Photoshop.
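
A simplified sketch of how name detection with webkitSpeechRecognition might look. The five-sample enrollment described above is reduced here to a hard-coded list of lowercase name variants, and the variant spellings are purely illustrative:

```javascript
// Pure helper: does a transcript contain any learned variant of the name?
function transcriptMentionsName(transcript, nameVariants) {
  const words = transcript.toLowerCase().split(/\s+/);
  return nameVariants.some((v) => words.includes(v.toLowerCase()));
}

// Browser wiring: continuously listen and flag mentions of the name.
if (typeof window !== 'undefined' && 'webkitSpeechRecognition' in window) {
  const recognition = new webkitSpeechRecognition();
  recognition.continuous = true;
  recognition.interimResults = false;
  const nameVariants = ['rishi', 'rishie']; // would be collected during enrollment
  recognition.onresult = (event) => {
    const transcript = event.results[event.results.length - 1][0].transcript;
    if (transcriptMentionsName(transcript, nameVariants)) {
      console.log('Name detected:', transcript); // log to conversation history
    }
  };
  recognition.start();
}
```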

We initially planned to build a native app, but given the short length of the hackathon we decided a web app was a better fit. We still ran into challenges while coding, such as unnecessary data appearing on screens and other toolkit integration problems, but by working together and not getting stuck on the smallest issues, we successfully built a working product.

About 15% of Americans have detectable partial hearing loss, and a very good friend of (a few of) ours, Rishi, is partially deaf. We recognized this as a major hindrance to a society that is inclusive of everyone, so to serve this group, and our good friend Rishi, we built Dhwani and made sure it is easy for everyone to use.

We're proud of how far we've come and of the way everyone on the team collaborated to deliver a product that does what it's intended to do, and does it well.

What's next:

  1. Public announcement detection (airports, train stations, etc.)
  2. Adding a translation package for multilingual interactions, so speakers and users can communicate across language boundaries
  3. Integration with hearing aids that connect to smartphones and smartwatches, making them truly 'smart'
