Inspiration

Deaf people often have trouble communicating with people who don't know sign language, and an automatic translation system could make that much easier.

What it does

It currently works with a small vocabulary of signs: using machine learning and data made up of examples of each sign, it identifies when those signs are being performed and then strings the recognized words together into sentences.

How we built it

We built it using Teachable Machine and JavaScript: we gathered data for different hand signs and trained a machine learning model to recognize them. We then embedded the model in a website and wrote JavaScript to read the model's predictions and string the recognized signs together into words and sentences.
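The "string predictions into a sentence" step can be sketched roughly as below. The per-frame prediction shape ({ className, probability }) matches what Teachable Machine's image library returns from model.predict(); the function names and the 0.9 default threshold are our own illustrative choices, not the exact code from the project.

```javascript
// Pick the most probable class from one frame's predictions.
function topPrediction(predictions) {
  return predictions.reduce((best, p) =>
    p.probability > best.probability ? p : best
  );
}

// Turn a sequence of per-frame predictions into a sentence,
// skipping low-confidence frames and collapsing consecutive
// frames that recognize the same sign into a single word.
function sentenceFromFrames(frames, threshold = 0.9) {
  const words = [];
  for (const frame of frames) {
    const top = topPrediction(frame);
    if (top.probability < threshold) continue; // not confident enough
    if (words[words.length - 1] !== top.className) {
      words.push(top.className);
    }
  }
  return words.join(" ");
}
```

In the real app, each frame would come from calling the model on the webcam feed in a loop; here the frames are just plain arrays, which keeps the word-stringing logic easy to test on its own.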

Challenges we ran into

The main challenges were finding enough data to identify hand signs accurately, and tuning the confidence threshold the model has to reach before it accepts a sign as a word.
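The threshold trade-off looks roughly like this: set it too low and noisy frames get accepted as words, set it too high and real signs get dropped. This is a hypothetical helper for illustration, not the project's actual code.

```javascript
// Accept the top prediction only when the model is sure enough;
// otherwise return null and emit no word for this frame.
function acceptSign(predictions, threshold) {
  const top = predictions.reduce((best, p) =>
    p.probability > best.probability ? p : best
  );
  return top.probability >= threshold ? top.className : null;
}

const frame = [
  { className: "thanks", probability: 0.82 },
  { className: "please", probability: 0.18 },
];
acceptSign(frame, 0.75); // a lax threshold accepts "thanks"
acceptSign(frame, 0.90); // a strict threshold rejects the frame, returning null
```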

Accomplishments that we're proud of

We're proud of having built a model that can string together simple sentences in sign language, and that could be extended into something genuinely useful.

What we learned

We learned more about machine learning, and about how we can use new computing technology to help people.

What's next for Untitled

Our next steps would be to expand the model to many more words, improve its ability to identify simple phrases, and then make it more widely available.
