ERRATUM: we weren't able to pitch because we forgot to submit the Google Form. So that's a big fat L. Nevertheless, we still think this is a pretty cool project, so enjoy.

Inspiration

One of our team members has an older relative with hearing impairments, and through his interactions with her he saw how difficult even simple tasks and conversations could be when she couldn't hear well. Though she has a hearing aid, she often finds it uncomfortable to wear and would prefer a better way to facilitate communication. Doing further research, we found that around 466 million people worldwide have disabling hearing loss, 34 million of whom are children (according to the WHO). Despite this large number, only about 70 million people around the world use sign language to communicate (according to the World Federation of the Deaf). Given that sign language is the primary means for people with hearing loss or deafness to communicate day to day, we wanted to develop a tool that helps people (both those with and without hearing loss) learn and practice American Sign Language.

What it does

A user can select from 27 ASL hand signs (the 26 letters of the alphabet plus space) and imitate the on-screen reference image in front of their webcam. The web app reads the image from the webcam and indicates whether the user's hand sign matches the chosen one. Additional insight is provided through a list of other hand signs that the user's sign could be misidentified as. Ultimately, Sign Lingo helps bridge the communication gap experienced daily by those with mild-to-severe hearing loss by serving as an educational tool to learn ASL, hand sign by hand sign.
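As a rough sketch of how that feedback step could work (the `Prediction` shape, the `checkSign` helper, and the 20% confidence threshold here are illustrative assumptions, not our exact code):

```typescript
// Hypothetical shape of one classifier result: a label and its confidence.
interface Prediction {
  tagName: string;     // e.g. "A", "B", ..., "Z", "space"
  probability: number; // 0..1
}

// Decide whether the user's sign matches the target, and surface
// other plausible labels the sign could be confused with.
function checkSign(predictions: Prediction[], target: string) {
  const sorted = [...predictions].sort((a, b) => b.probability - a.probability);
  const top = sorted[0];
  const matched = top !== undefined && top.tagName === target;
  // Any other label above an (assumed) 20% confidence is a likely confusion.
  const confusedWith = sorted
    .filter((p) => p.tagName !== target && p.probability > 0.2)
    .map((p) => p.tagName);
  return { matched, confusedWith };
}
```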

How we built it

We trained the model with Azure Custom Vision and exported it as a Docker container that serves the resulting TensorFlow model. Our React app sends webcam frames to this TensorFlow backend, which classifies each image and returns its predictions.
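A minimal sketch of that React-to-backend hop, assuming the exported Custom Vision container is running locally and accepts a raw image POST at an `/image` endpoint (the URL, port, and response shape below are assumptions; check the docs that ship with your export):

```typescript
type Prediction = { tagName: string; probability: number };

// Send one webcam frame to the classification backend and return
// the list of predicted labels with their confidences.
async function classifyFrame(frame: Blob): Promise<Prediction[]> {
  const res = await fetch("http://localhost:8080/image", {
    method: "POST",
    headers: { "Content-Type": "application/octet-stream" },
    body: frame,
  });
  if (!res.ok) throw new Error(`Backend returned ${res.status}`);
  const data = await res.json();
  return data.predictions as Prediction[];
}
```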

Challenges we ran into

One of the challenges we ran into was training the model. None of us had prior experience with TensorFlow or other machine learning libraries, so we had to get up to speed on machine learning in a short span of time.

Accomplishments that we're proud of

Nevertheless, we're ultimately proud of bringing this idea to life. We hope that through ideas like these, we can make the world a better place one step at a time.

What we learned

CHECK THE SLACK NEXT TIME AND REMEMBER TO SUBMIT THE GOOGLE FORMS.

Through today's event, we mainly explored what we could do with machine learning. We also learned how to deploy a TensorFlow application on a virtual private server and how to integrate a system webcam with a web application.
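For reference, the webcam integration boils down to the browser's standard `getUserMedia` API plus a canvas capture. This is a generic sketch rather than our exact component:

```typescript
// Attach the webcam stream to a <video> element.
async function startWebcam(video: HTMLVideoElement): Promise<void> {
  const stream = await navigator.mediaDevices.getUserMedia({ video: true });
  video.srcObject = stream;
  await video.play();
}

// Grab a single frame as a JPEG blob that can be sent to the backend.
function captureFrame(video: HTMLVideoElement): Promise<Blob> {
  const canvas = document.createElement("canvas");
  canvas.width = video.videoWidth;
  canvas.height = video.videoHeight;
  canvas.getContext("2d")!.drawImage(video, 0, 0);
  return new Promise((resolve, reject) =>
    canvas.toBlob(
      (blob) => (blob ? resolve(blob) : reject(new Error("capture failed"))),
      "image/jpeg"
    )
  );
}
```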

What's next for Sign Lingo

The immediate next step we want to take is to refine the app's webcam reader for higher accuracy when reading users' hand signs, so that we can give more precise and correct feedback to users learning and practicing ASL. Eventually, a mobile app could be developed, making the tool easier to access and learn from on the spot. Furthermore, we could aim to make Sign Lingo capable of reading not only individual characters but also entire strings that form words and sentences.
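One simple way to extend per-frame letters into strings would be to accept a letter only after it is predicted stably across several consecutive frames. A hedged sketch of that idea (the `SignBuffer` class and the 5-frame threshold are our assumptions, not a committed design):

```typescript
// Accumulate per-frame letter predictions into a string, appending a
// letter only after it has been seen for `needed` consecutive frames.
class SignBuffer {
  private text = "";
  private lastLetter = "";
  private streak = 0;

  constructor(private needed = 5) {} // assumed tuning value

  push(letter: string): string {
    if (letter === this.lastLetter) {
      this.streak += 1;
    } else {
      this.lastLetter = letter;
      this.streak = 1;
    }
    // Append exactly once, the moment the streak reaches the threshold.
    if (this.streak === this.needed) {
      this.text += letter === "space" ? " " : letter;
    }
    return this.text;
  }
}
```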

Built With

azure, docker, react, tensorflow
