Inspiration

After experimenting with the Leap Motion, we decided it was time to build a project with a real impact on an underserved community. We want to help hearing-impaired people communicate with others by translating sign language to voice using the Watson API. We also added features such as English translation and synonym lookup using the Oxford Dictionary API.

What it does

Its primary function is converting sign language to spoken text; additionally, it translates the word to English and provides synonyms.

How we built it

We recorded each word we wanted to translate 200 times (5 words in total). We converted each recording into spatial coordinates, normalized the data, and uploaded it to Watson, which responds by classifying the coordinates as one of the words. We then sent that word through text-to-speech, got back an audio file, and played it in the browser using React. Additionally, we consult an external API (the Oxford Dictionary) to get a translation and some synonyms.
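The normalization step above can be sketched as follows. This is a minimal illustration, not our actual code: the function name `normalize_frames` and the assumption that each recording is an `(n_frames, n_joints, 3)` array of Leap Motion coordinates are ours, made up for the example.

```python
import numpy as np

def normalize_frames(frames):
    """Center a recorded gesture and scale it to unit range so that
    recordings made by different people, at different distances from
    the sensor, become comparable before training.

    `frames` is assumed to be an (n_frames, n_joints, 3) array of
    Leap Motion coordinates; the exact layout is an assumption.
    """
    frames = np.asarray(frames, dtype=float)
    # Remove the absolute position offset of the hand in sensor space.
    centered = frames - frames.mean(axis=(0, 1), keepdims=True)
    scale = np.abs(centered).max()
    if scale == 0:
        return centered  # degenerate all-zero recording
    # Scale so every coordinate lies in [-1, 1].
    return centered / scale
```

Normalized recordings like these are what we uploaded for classification; without centering and scaling, the classifier would partly learn where the hand was rather than how it moved.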

Challenges we ran into

- Training the model
- Communicating with the API
- Integration problems

Accomplishments that we're proud of

- Gesture-to-text recognition
- Text-to-speech file conversion

What we learned

We learned to use React, to use the Watson services, and about CORS policies.
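The CORS lesson boils down to the backend having to opt in before the browser lets a React frontend on another origin read its responses. A minimal sketch using only the Python standard library (the endpoint and JSON payload are invented for illustration; in production the `*` origin should be restricted):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class CORSHandler(BaseHTTPRequestHandler):
    """Toy API handler that sends the header the browser checks
    before allowing a cross-origin frontend to read the response."""

    def do_GET(self):
        self.send_response(200)
        # Without this header, a fetch() from the React dev server
        # on another port is blocked by the browser's CORS policy.
        self.send_header("Access-Control-Allow-Origin", "*")
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(b'{"word": "hello"}')

if __name__ == "__main__":
    # Serve on an arbitrary free port for local testing.
    HTTPServer(("127.0.0.1", 8000), CORSHandler).serve_forever()
```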

What's next for 100 - Easy Communication

Move the project to a more accurate system such as a Kinect, and train it to recognize more words and even take context into account.
