Our project was inspired by the situation of deaf and speech-impaired people today, whose biggest challenges are a lack of social interaction and barriers to language and communication. We set out to narrow this divide: Voice4Mutes empowers individuals to overcome the communication barrier using technology.

What it does

Voice4Mutes aims to reduce the communication gap between hearing/speech-impaired people and hearing people. Voice4Mutes has three modes. The first mode teaches sign language theory. The second mode lets users test their knowledge through exercises, both theoretical and practical, by identifying hand signs via the camera. The third mode translates in real time: to communicate with a hearing/speech-impaired person, you simply speak into the microphone, and the app converts your speech to text and to sign language so they can understand what you are trying to convey. Similarly, when they sign back to you, the web app converts the sign language to text in real time so you can understand them.
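The speech-to-sign direction in the third mode comes down to mapping recognized text onto sign imagery. A minimal sketch of that lookup step (not the actual Voice4Mutes code; the file names and fingerspelling-only fallback are assumptions for illustration):

```python
# Hypothetical sketch: turn a recognized speech transcript into a
# sequence of sign-language image files to display, fingerspelling
# each letter. File paths are made up for illustration.
def transcript_to_signs(transcript):
    """Return image file names that fingerspell the transcript."""
    signs = []
    for ch in transcript.lower():
        if ch.isalpha():
            signs.append(f"signs/asl_{ch}.png")  # one image per letter
        elif ch == " ":
            signs.append("signs/space.png")      # pause between words
    return signs

print(transcript_to_signs("Hi"))  # ['signs/asl_h.png', 'signs/asl_i.png']
```

A full implementation would prefer whole-word sign animations where available and fall back to fingerspelling only for unknown words.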

How we built it

We built our website using HTML5, CSS3, JavaScript, and Bootstrap. For authentication we used Firebase. We used to host our site. To get real-time translation of audio to sign language we used a Google speech-to-text API. For our deep-learning model we used ResNetV2 for transfer learning, used OpenCV to capture frames and make predictions, and deployed the model with Flask.
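Transfer learning on a ResNetV2 backbone might look like the following Keras sketch. The class count, head layers, and hyperparameters are assumptions, not the team's actual values, and `weights=None` is used here only to keep the sketch offline-runnable (in practice you would start from `weights="imagenet"`):

```python
# Hedged sketch of transfer learning with a ResNet50V2 base in Keras.
# NUM_CLASSES and the classifier head are illustrative assumptions.
import tensorflow as tf

NUM_CLASSES = 26  # e.g. one class per fingerspelled letter (assumed)

base = tf.keras.applications.ResNet50V2(
    include_top=False, weights=None, input_shape=(224, 224, 3))
base.trainable = False  # freeze the convolutional base for transfer learning

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```

Training would then call `model.fit(...)` on batches of labeled hand-sign images, typically fed by a generator that preprocesses webcam frames to 224x224.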

Challenges we ran into

The front end of the web app came together smoothly. The major challenge we faced was deploying the deep-learning model that converts hand signs to text: serving a DL model from a web app meant integrating it with Flask, which was another challenge in itself. After working through some initial errors, we successfully deployed and integrated the model.
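The Flask integration described above can be reduced to a small sketch: a single endpoint that receives a webcam frame and returns the predicted sign. The route name, payload shape, and `predict_sign` stand-in are assumptions, not the team's actual code:

```python
# Minimal hypothetical sketch of exposing a DL model through Flask.
from flask import Flask, jsonify, request

app = Flask(__name__)

def predict_sign(image_bytes):
    # Placeholder: the real app would decode the frame with OpenCV and
    # run it through the ResNetV2 model; here we return a dummy label.
    return "A"

@app.route("/predict", methods=["POST"])
def predict():
    frame = request.get_data()  # raw image bytes posted from the webcam
    return jsonify({"sign": predict_sign(frame)})

# To serve locally: app.run(host="0.0.0.0", port=5000)
```

Loading the model once at startup (rather than per request) is the usual fix for the slow-response errors that tend to surface when first wiring a DL model into Flask.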

Accomplishments that we are proud of

We at Voice4Mutes are proud to have achieved real-time translation of audio to sign language as well as sign language to text.

What we learned

During the hackathon we learned to work with Flask. The most important things we learned were time management and how to work as a team under a deadline.

What's next for Voice4Mutes

Voice4Mutes plans to add support for additional sign languages, such as British Sign Language and Indian Sign Language. We are also working on a machine-learning model that surfaces information and facts about the deaf community, to encourage more people to learn sign language and further reduce the communication gap. Voice4Mutes aims to become the go-to translation tool for the community.

Built With

bootstrap, css3, firebase, flask, html5, javascript, opencv