Inspiration

With many schools and other institutions going virtual since the start of the Covid-19 pandemic, people with speech disabilities face new difficulties communicating with their peers.

What it does

Talking Hands is an app that allows people with speech disabilities to communicate virtually using American Sign Language (ASL). It translates ASL gestures captured through the webcam video feed into English speech.

How we built it

We used Microsoft Azure's Custom Vision AI platform to train the model that maps ASL gestures to their English equivalents. We then exported the model in TensorFlow format and imported it into the desktop application. The desktop application is built with OpenCV, which captures input from the user's webcam. Once the app produces the English equivalent of a gesture, Google Cloud's Text-to-Speech service reads the words out loud.
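
For illustration, here is a minimal sketch of how the capture, classification, and speech steps fit together. It assumes the Custom Vision export has been converted to a Keras-loadable model at `model/` with a `labels.txt` of gesture names, a 224x224 input, and Google Cloud credentials already configured; these file names, the input size, and the speak-on-change logic are illustrative placeholders rather than the project's actual code.

```python
# Sketch of the capture -> classify -> speak loop (paths and sizes are assumptions).
import cv2
import numpy as np
import tensorflow as tf
from google.cloud import texttospeech

model = tf.keras.models.load_model("model/")        # exported classifier (assumed path)
labels = open("labels.txt").read().splitlines()      # ASL gesture labels (assumed file)
tts = texttospeech.TextToSpeechClient()              # uses GOOGLE_APPLICATION_CREDENTIALS

def speak(text: str) -> None:
    """Synthesize the recognized word with Google Cloud Text-to-Speech."""
    response = tts.synthesize_speech(
        input=texttospeech.SynthesisInput(text=text),
        voice=texttospeech.VoiceSelectionParams(language_code="en-US"),
        audio_config=texttospeech.AudioConfig(
            audio_encoding=texttospeech.AudioEncoding.MP3),
    )
    with open("word.mp3", "wb") as f:                # playback handled elsewhere in the app
        f.write(response.audio_content)

cap = cv2.VideoCapture(0)                            # webcam input via OpenCV
last_word = None
while True:
    ok, frame = cap.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)     # OpenCV frames are BGR; model expects RGB
    # Resize to the model's input size, scale to [0, 1], and add a batch dimension.
    img = cv2.resize(rgb, (224, 224)).astype(np.float32)[np.newaxis] / 255.0
    probs = model.predict(img, verbose=0)[0]         # forward pass through the classifier
    word = labels[int(np.argmax(probs))]             # English equivalent of the gesture
    if word != last_word:                            # only speak when the prediction changes
        speak(word)
        last_word = word
    cv2.putText(frame, word, (10, 30), cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
    cv2.imshow("Talking Hands", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):            # press 'q' to quit
        break
cap.release()
cv2.destroyAllWindows()
```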

Challenges we ran into

Finding the right dataset and training the image classifier was one of the biggest challenges. Integrating the Azure-trained model into the desktop application was also difficult, as we had to learn and implement OpenCV and TensorFlow along the way.

Accomplishments that we are proud of

We are proud that we were able to build an app that applies machine learning and data science to a real accessibility problem affecting people around the world.

What we learned

We got hands-on experience with many new technologies, including Microsoft Azure, OpenCV, TensorFlow, Google Cloud, and PyQt.

What's next for Talking Hands

Since the main goal of the app is to help people communicate virtually, we plan to add functionality that integrates it with video-conferencing apps such as Zoom, Google Meet, and Microsoft Teams.
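
One possible route to that integration, sketched here as an assumption rather than a committed design, is to publish the annotated frames to a virtual webcam using the third-party pyvirtualcam package, which Zoom, Google Meet, and Microsoft Teams can then select as a camera source. The placeholder caption "HELLO" stands in for the recognized word.

```python
# Forward annotated OpenCV frames to a virtual camera (one possible integration path).
import cv2
import pyvirtualcam

cap = cv2.VideoCapture(0)
with pyvirtualcam.Camera(width=640, height=480, fps=30) as cam:
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frame = cv2.resize(frame, (640, 480))
        # Overlay the recognized word (placeholder here) before forwarding the frame.
        cv2.putText(frame, "HELLO", (10, 30),
                    cv2.FONT_HERSHEY_SIMPLEX, 1, (0, 255, 0), 2)
        cam.send(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))  # pyvirtualcam expects RGB
        cam.sleep_until_next_frame()
cap.release()
```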

Demo Video

Link to the project demo video: https://www.youtube.com/watch?v=7A4wHdOUinw

Built With

Microsoft Azure Custom Vision, OpenCV, TensorFlow, Google Cloud Text-to-Speech, PyQt