One of my friends relies on sign language to communicate, and he didn't want to miss out on the experience of video calling, so I made this.

What it does

My program is a video calling application that recognizes American Sign Language and converts it to text in real time.

How I built it

I trained a machine learning model to recognize sign language using TensorFlow.js: MobileNet extracts a feature embedding from each webcam frame, and a KNN classifier matches that embedding against labeled example signs.
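To illustrate the classification step, here is a minimal sketch of k-nearest-neighbors in plain JavaScript. In the actual project the feature vectors would be MobileNet embeddings produced by TensorFlow.js (via `@tensorflow-models/mobilenet` and `@tensorflow-models/knn-classifier`); the hand-made vectors and the `knnClassify` name below are illustrative, not taken from the project's code.

```javascript
// Euclidean distance between two feature vectors of equal length.
function euclidean(a, b) {
  let sum = 0;
  for (let i = 0; i < a.length; i++) {
    const d = a[i] - b[i];
    sum += d * d;
  }
  return Math.sqrt(sum);
}

// examples: [{vector: number[], label: string}], query: number[]
// Returns the majority label among the k examples closest to `query`.
function knnClassify(examples, query, k = 3) {
  const nearest = examples
    .map(ex => ({ label: ex.label, dist: euclidean(ex.vector, query) }))
    .sort((a, b) => a.dist - b.dist)
    .slice(0, k);
  const votes = {};
  for (const n of nearest) votes[n.label] = (votes[n.label] || 0) + 1;
  return Object.entries(votes).sort((a, b) => b[1] - a[1])[0][0];
}

// Toy "embeddings" standing in for MobileNet activations of two signs.
const examples = [
  { vector: [0.9, 0.1], label: 'A' },
  { vector: [0.8, 0.2], label: 'A' },
  { vector: [0.1, 0.9], label: 'B' },
  { vector: [0.2, 0.8], label: 'B' },
];
console.log(knnClassify(examples, [0.85, 0.15])); // → A
```

Because KNN just stores labeled embeddings, new signs can be taught at runtime by adding examples, with no retraining pass, which is why this pairing is a common TensorFlow.js transfer-learning recipe.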

Challenges I ran into

Training the model was a challenge: I had never trained my own model before, and this kind of machine learning workflow is poorly documented online.

Accomplishments that I'm proud of

I'm proud of training my own model. That was an interesting (and painstaking) experience, as ML is not very well documented online.

What I learned

I learned how to train my own model! This is my first time doing that. I also learned a tiny bit of JavaScript.

What's next for Talk to the Hand

Next, I want to turn this into a mobile app to make it more accessible. I also want to improve the accuracy of the model.
