We have two deaf cousins, and we recently watched The Sound of Metal, which gave us a lot of insight into the challenges that deaf and hearing-impaired people face in society. We wanted to build a way for anyone to learn or improve their ASL so they can communicate with deaf and hearing-impaired individuals, bridging the gap and reducing the miscommunication that can arise in these situations, which can be detrimental.

This experience led us to build ASLearn, a tool designed to teach and practice American Sign Language (ASL), bridging communication gaps and connecting us with the people around us.

ASLearn was built with HTML, CSS, and JavaScript using the React framework. We also used Python to train a model on our own custom ASL dataset, capable of recognizing American Sign Language gestures.
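The writeup doesn't describe the model itself, but a common approach for this kind of gesture recognizer is to classify hand-landmark coordinates (e.g., the 21 (x, y) points many hand-tracking libraries produce). The sketch below is purely illustrative, not ASLearn's actual model: a minimal nearest-centroid classifier over flattened landmark vectors, with hypothetical names throughout.

```python
import numpy as np

# Illustrative sketch only: ASLearn's real model is not specified in the
# writeup. This shows one common pattern, classifying a sign from a flat
# vector of hand-landmark coordinates.

N_LANDMARKS = 21  # typical hand-landmark count in tracking libraries


def flatten_landmarks(landmarks):
    """Turn a list of (x, y) landmark points into one flat feature vector."""
    return np.asarray(landmarks, dtype=float).reshape(-1)


class NearestCentroidSigns:
    """Tiny nearest-centroid classifier over landmark feature vectors."""

    def fit(self, X, y):
        # One centroid (mean feature vector) per sign label.
        self.labels_ = sorted(set(y))
        self.centroids_ = np.stack([
            np.mean([x for x, lbl in zip(X, y) if lbl == c], axis=0)
            for c in self.labels_
        ])
        return self

    def predict(self, x):
        # Assign the label of the closest centroid.
        dists = np.linalg.norm(self.centroids_ - np.asarray(x), axis=1)
        return self.labels_[int(np.argmin(dists))]


if __name__ == "__main__":
    # Synthetic stand-in data: two well-separated "signs".
    rng = np.random.default_rng(0)
    sign_a = rng.normal(0.2, 0.01, size=(5, N_LANDMARKS * 2))
    sign_b = rng.normal(0.8, 0.01, size=(5, N_LANDMARKS * 2))
    X = list(sign_a) + list(sign_b)
    y = ["A"] * 5 + ["B"] * 5

    clf = NearestCentroidSigns().fit(X, y)
    print(clf.predict(sign_a[0]), clf.predict(sign_b[0]))
```

In practice a trained neural network or k-NN over a real labeled dataset would replace the centroid step, but the pipeline shape (landmarks in, sign label out) stays the same.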

During the development of ASLearn, we faced many challenges. Coming into NexHacks, our team was inexperienced, and this environment was new to us. We struggled with everything from creating our own dataset to designing and building the user interface. We also spent a great deal of time debugging issues and testing edge cases to ensure everything worked as we intended.

We are proud of the entire development of ASLearn. Along the way, we got to learn American Sign Language and test what we learned using our own product. We also learned how the full development cycle works, from debugging to deployment in the final stages.

Aside from the new tech stack we picked up, something far more valuable stuck with us: failure and frustration are often part of the learning process. We learned to embrace the failures and persevere, which made it all worth it in the end.

We believe ASLearn has potential. In the future, we plan to add a facial tracker to capture expression, since facial expression is a huge part of American Sign Language. We also plan to implement quick signs for common words such as "love" or "eat". The time is now.
