Inspiration

Many people with speech or hearing impairments are still poorly accommodated in everyday communication. We wanted to build software that uses technology to make communication more accessible for people with these disabilities.

What it does

ViSign aims to translate between sign language and written language. It analyzes hand gestures, recognizes their meaning in American Sign Language (ASL), and translates them into written English letters. It also aims to read text input and visualize it in ASL. Moreover, ViSign can not only translate from motion to text, but also from text to voice, so people who are hard of hearing can type and sign, and also work on proper pronunciation.

How we built it

We used Leap Motion hardware to detect hand gestures. The captured gestures were then used as training data for a TensorFlow neural network. We also used Google's text-to-speech API to convert text to audio.
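As a rough illustration of the approach (not our exact code), a small Keras classifier over flattened Leap Motion hand-feature vectors might look like the sketch below. The feature size, layer widths, and the use of random stand-in data are all assumptions for the example:

```python
import numpy as np
import tensorflow as tf

# Assumption: each Leap Motion frame is flattened into a fixed-size feature
# vector (e.g. hand keypoint coordinates), labeled with one of the 26 ASL
# alphabet letters.
NUM_FEATURES = 63   # assumed: 21 hand keypoints x 3 coordinates
NUM_CLASSES = 26    # ASL alphabet

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(NUM_FEATURES,)),
    tf.keras.layers.Dense(128, activation="relu"),
    tf.keras.layers.Dropout(0.3),  # dropout helps limit overfitting
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])
model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# Synthetic stand-in data; real training would use recorded gesture frames.
X = np.random.rand(256, NUM_FEATURES).astype("float32")
y = np.random.randint(0, NUM_CLASSES, size=256)
model.fit(X, y, epochs=2, batch_size=32, verbose=0)

probs = model.predict(X[:1], verbose=0)
print(probs.shape)  # one probability distribution over the 26 letters
```

At inference time, the argmax of the softmax output gives the predicted letter for each frame.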

Challenges we ran into

One of the main challenges we ran into was training the neural net correctly. First, we are not experts in sign language, so our training gestures may have been partially incorrect and inconsistent. In addition, training the net by gradient descent without overfitting the data required a lot of trial and error with the network configuration.

One last challenge was finding a proper way to turn recognized gestures into text output. We realize that our AI is not perfect, but we still want to output the best text possible, so we require a minimum number of consecutive recognitions of the same letter before a character is emitted. The post-neural-network processing is also a tricky problem: we admit that our filter is not perfect at this point, and it sometimes fails to eliminate false detections from the stream. Still, for a 24-hour hackathon we are proud of what we built, and we believe that with enough time and effort the product could be finely tuned.
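The consecutive-recognition filter described above can be sketched in a few lines of plain Python. The threshold value and function name are illustrative assumptions, not our exact implementation:

```python
def debounce(predictions, min_consecutive=5):
    """Emit a letter only once it has been recognized min_consecutive
    times in a row, suppressing one-off false detections."""
    output = []
    current, run = None, 0
    for letter in predictions:
        if letter == current:
            run += 1
        else:
            current, run = letter, 1
        # Emit exactly once, at the moment the run reaches the threshold.
        if run == min_consecutive:
            output.append(letter)
    return output

# A noisy stream: a stray "B" and a too-short "A" run are filtered out.
stream = ["A"] * 6 + ["B"] + ["C"] * 5 + ["A"] * 2
print(debounce(stream))  # ['A', 'C']
```

A longer threshold trades responsiveness for robustness, which is exactly the tuning problem we ran into.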

Accomplishments that we're proud of

We are proud of training a working neural net, and of creating a product demo that could genuinely benefit society.

What we learned

Neural Nets and AI are very difficult.

What's next for ViSign

Big things are next for ViSign! The ultimate goal is to implement a long short-term memory (LSTM) network that could recognize and translate all sign language gestures, not just the alphabet. This would be a great tool for learning sign language as well as for communicating quickly with people who do not know it. Imagine using your phone to record someone signing a message, and having that message output as text or audio to enable efficient communication between both parties! Other applications include video chats, day-to-day sign-to-text input, and pronunciation practice. We envision a world where people who are hard of hearing, or who have recently become deaf, can use our software to practice signing in private and learn to pronounce words correctly using sign-to-audio! We believe ViSign can help bridge the communication gap between those who know sign language and those who do not.
