There are roughly 70 million deaf people worldwide who use a sign language as their first language or mother tongue; it is also the first language of many hearing people and some deafblind people. Yet there is a radical communication gap between the deaf community and the rest of society: the vast majority of people do not know any sign language. We therefore set out to use the Leap Motion to build a seamless, intuitive sign language translator.
What it does
ASLSpeak uses the Leap Motion to gather data on the hand motions of a sign language user and feeds that data into a machine learning classifier to determine which letter or word was signed. It then uses the laptop's text-to-speech to say the translation aloud.
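The classify-then-speak loop might look roughly like the sketch below. The classifier, feature layout, and helper names are placeholders, not ASLSpeak's actual code; `pyttsx3` is one common offline text-to-speech library (the original used the laptop's built-in TTS), so the sketch falls back to printing if it is unavailable.

```python
# Hedged sketch of the inference pipeline: one frame of Leap Motion
# hand features -> predicted letter -> spoken output.
import numpy as np

def classify(features, model):
    """Map one frame of hand features to a letter via a trained model."""
    return model.predict(features.reshape(1, -1))[0]

def speak(text):
    """Speak the translation; fall back to printing if no TTS engine."""
    try:
        import pyttsx3  # assumed offline TTS library, not the original's
        engine = pyttsx3.init()
        engine.say(text)
        engine.runAndWait()
    except Exception:
        print(text)

# Tiny stand-in model so the sketch runs end to end without hardware.
class ConstantModel:
    def predict(self, X):
        return ["A"] * len(X)

letter = classify(np.zeros(350), ConstantModel())  # 350 hand features
speak(letter)
```

In the real system the feature vector would come from the Leap Motion SDK each frame rather than a zero array, and the model would be the trained neural network.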
How I built it
We used Python to collect training data from sign language users and fed it into a neural network classifier. After training the classifier, we wrote a script that takes user input (signed through the Leap Motion) and automatically speaks the translation.
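The training step can be sketched as below, assuming the captured hand features are exported as fixed-length vectors with letter labels. The random stand-in data, network size, and hyperparameters are illustrative assumptions, not the project's actual setup; scikit-learn's `MLPClassifier` stands in for whatever neural network library was used.

```python
# Hedged sketch of training a neural network on Leap Motion hand
# features (one ~350-dimensional vector per captured sign).
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in data: 200 samples x 350 hand features, labeled with letters.
# In practice these would be real captures from the Leap Motion.
X = rng.normal(size=(200, 350))
y = rng.choice(list("ABC"), size=200)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
clf.fit(X_train, y_train)

print("held-out accuracy:", clf.score(X_test, y_test))
```

The trained `clf` would then be loaded by the translation script, which classifies each signed gesture and passes the result to text-to-speech.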
Challenges I ran into
Our neural network accounted for over 350 features of a user's hand, so it was often too slow for everyday conversation. Additionally, we had only 24 hours to write a script to gather training data, collect that data, train the neural network classifier, and build the script that translates user input. With more time, we could gather better training data and improve the speed, which would do wonders for its usability as an everyday communication tool.
What I learned
We had little to no machine learning experience coming into the hackathon, and definitely feel like we've learned a lot more about this interesting technology.
What's next for ASLSpeak
Development of ASLSpeak does not stop with the end of DubHacks. We plan to collect more, higher-quality training data and to improve the accuracy of our machine learning algorithm.