The inspiration for this project came from Bilal's linguistics class, where his professor explained that Deaf people are often left out of conversations because most hearing people do not know sign language well enough to communicate with them, leaving both sides unable to connect.

That got us thinking - what if we could create a 24/7 translator that bridges that gap and enables two-way communication?

SnapSign takes English voice input and translates every word into the Sign Language alphabet, displayed right in the Spectacles, so hearing users can sign it. In return, Deaf users can sign to the Spectacles, and hearing users are told which letter the sign is closest to. This device allows hearing individuals who do not know sign language to communicate with the Deaf community.

We used Lens Studio and JavaScript to create our American Sign Language to English translator. Our project leverages OpenAI's API to predict the most probable sentence from the input audio. We also made use of the native Snap assets for Spectacles (HandGestures, StoreGestures, and VoiceML) to recognize ASL signs and to transcribe audio into discrete text units that were then translated into ASL. Our team also used Google Gemini and Cursor to help write and debug the code for the project.

Most of our team had never worked with Lens Studio or the Spectacles before, which made for a steep learning curve, especially at the beginning of the project. Some of the biggest difficulties came from understanding Snap-specific conventions, including code organization and how best to use Lens Studio's built-in functionality. Due to time constraints, although we created a functional translator between ASL and English in both directions, the two applications are not yet integrated, and the ASL gesture recognition plugin only prints its most likely outputs to the console and is not 100% accurate.

We're proud of the machine learning used to predict the sentence, correct it to the most probable phrasing, and translate it into signable ASL gestures. We are also proud of our hand-tracking mechanism, which recognizes which ASL gesture is being signed. And we are especially proud of the accuracy of our English-to-ASL translation program, which uses NLP to correct itself and produce legible sentences even when the audio is unclear.

Above all, we are proud that this software will allow the hearing community to speak to the Deaf community in an intelligent and understandable manner, bridging two communities separated by biology with technology.
