Currently, about 600,000 people in the United States have some form of hearing impairment. Through personal experience, we understand how much guidance is typically needed to communicate with someone through ASL. Our software removes that dependence and promotes a more connected community, one with a lower barrier to entry for sign language users.

Our web-based project detects signs in real time from the live camera feed, while features like autocorrect and autocomplete cut down communication time so the focus stays on the conversation rather than the interface. The Learn feature also lets users explore and improve their sign language skills in a fun, engaging way. Given limited time and computing power, we chose to train our ML model on ASL, one of the most widely used sign languages, but the same approach extends naturally to other sign languages; sketches of the detection and autocorrect ideas follow below.
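For context, a browser-side detection step might look something like the following sketch. Everything in it is an illustrative assumption rather than our actual configuration: the model path, the 64×64 input size, and the label set (the static ASL fingerspelling alphabet, which omits the motion-based letters J and Z) are hypothetical stand-ins.

```typescript
// Hypothetical sketch: classify the current webcam frame with a
// pretrained TensorFlow.js model. MODEL_URL, input size, and labels
// are illustrative assumptions, not the project's real configuration.
import * as tf from '@tensorflow/tfjs';

const MODEL_URL = '/models/asl/model.json'; // hypothetical export path
// Static ASL fingerspelling alphabet (J and Z require motion, so they
// are left out of this frame-by-frame example).
const LABELS = 'ABCDEFGHIKLMNOPQRSTUVWXY'.split('');

async function classifyFrame(
  model: tf.LayersModel,
  video: HTMLVideoElement
): Promise<string> {
  // Grab the frame, resize to the assumed model input, scale to [0, 1],
  // and add a batch dimension before predicting.
  const scores = tf.tidy(() =>
    model.predict(
      tf.browser.fromPixels(video)
        .resizeBilinear([64, 64])
        .toFloat()
        .div(255)
        .expandDims(0)
    ) as tf.Tensor
  );
  const probs = await scores.data();
  scores.dispose();
  // Pick the label with the highest score.
  let best = 0;
  for (let i = 1; i < probs.length; i++) {
    if (probs[i] > probs[best]) best = i;
  }
  return LABELS[best];
}

// Usage: load the model once, then classify frames on an interval
// or requestAnimationFrame loop.
// const model = await tf.loadLayersModel(MODEL_URL);
// const letter = await classifyFrame(model, videoElement);
```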
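The autocorrect idea can similarly be illustrated with a classic edit-distance lookup: when a letter is misrecognized, the spelled word is snapped to the closest dictionary entry. This is a generic sketch of the technique, not our actual implementation.

```typescript
// Generic sketch of edit-distance autocorrect: pick the dictionary
// word closest (by Levenshtein distance) to the letters spelled so far.
function editDistance(a: string, b: string): number {
  // dp[i][j] = edits to turn the first i chars of a into the first j of b.
  const dp: number[][] = Array.from({ length: a.length + 1 }, (_, i) =>
    Array.from({ length: b.length + 1 }, (_, j) =>
      i === 0 ? j : j === 0 ? i : 0
    )
  );
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      dp[i][j] = Math.min(
        dp[i - 1][j] + 1, // deletion
        dp[i][j - 1] + 1, // insertion
        dp[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1) // substitution
      );
    }
  }
  return dp[a.length][b.length];
}

function autocorrect(word: string, dictionary: string[]): string {
  let best = word;
  let bestDist = Infinity;
  for (const candidate of dictionary) {
    const d = editDistance(word, candidate);
    if (d < bestDist) {
      bestDist = d;
      best = candidate;
    }
  }
  return best;
}

// Example: a misrecognized letter gets corrected against a word list.
console.log(autocorrect('hellp', ['hello', 'help', 'world']));
// -> "hello" (ties go to the earlier dictionary entry)
```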

With a model extended to other sign languages, this could be a major step toward bridging the gap between the worlds of sign and spoken languages.
