According to research at Gallaudet University, approximately 1,000,000 people in the U.S. over the age of 5 are functionally deaf. Globally, a stigma surrounds the deaf community, whose members are often seen as "disabled." Yet despite the difference in modality, sign languages have been found to be processed in the brain the same way as spoken languages, and they support the same depth of comprehension, which suggests that the current segregation between "normal" people and deaf people is unjustified. We saw how Duolingo attracts a large user base by making it easy to learn new languages and by exposing people to other cultures; we were disappointed that no similar platform for American Sign Language (ASL) was as easily accessible. We hope that this project makes the deaf community more accessible to hearing people, so that deaf people feel more welcome in our society.

What it does

Our web app is a platform that helps people learn ASL through practice exercises and repetition. An online login and dashboard page helps the user navigate the many ASL lessons in our database. Users can then complete the lessons, which consist of various problem types, including matching hand motions depicted in images, matching signs with their meanings, and recalling the symbols. Every exercise that requires the user to recreate an ASL motion is classified and checked for proper form by a machine learning model.
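The form check described above can be sketched roughly as follows: the classifier scores the user's gesture image against each known sign, and the attempt is accepted only if the top prediction matches the target sign with enough confidence. The function names, label set, and threshold here are illustrative assumptions, not the project's actual code.

```python
import math

def softmax(logits):
    """Convert raw model scores (logits) into class probabilities."""
    exps = [math.exp(x - max(logits)) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def check_sign(logits, labels, target_label, threshold=0.8):
    """Accept the attempt only if the top prediction matches the
    target sign with probability above `threshold` (hypothetical rule)."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    return labels[best] == target_label and probs[best] >= threshold

# Example: logits strongly favouring the sign "A"
labels = ["A", "B", "C"]
accepted = check_sign([5.0, 0.5, 0.1], labels, "A")  # True
```

Gating on a confidence threshold, rather than only the argmax, keeps the lesson from rewarding sloppy gestures that the model only weakly recognizes.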

How we built it

The website scripts are written in TypeScript for Angular. We used Firebase's Auth API for secure logins, and stored user progress and lesson data in a Firebase database. We built our templates and static files with HTML and CSS, with help from the Materialize CSS framework for formatting. Our gesture recognition model was developed in Python 2.7 with TensorFlow and trained on over 160,000 images from both publicly available (link) and private photographs. The model was trained on an AWS EC2 compute instance and is served for predictions through Flask.

Challenges we ran into

While we were able to find a good dataset online, we decided to create additional data to provide a more diverse range of example images. In addition, we ran into various problems in trying to implement aspects of the Firebase API. Lastly, we originally hoped to host our TensorFlow model online via AWS or GCP, but were deterred by the confusing official documentation and the difficulty of setting up those solutions. Much of the difficulty stemmed from the need for TensorFlow and other ML libraries, which were not readily available in the Lambda functions we were looking into.
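One common way to diversify training images like the extra data mentioned above is simple augmentation: mirroring each image and jittering its brightness. This is only a toy sketch over a grayscale image represented as nested lists; the project's real pipeline and transforms are not described in the source.

```python
import random

def flip_horizontal(image):
    """Mirror an image (rows of pixel values) left to right."""
    return [row[::-1] for row in image]

def jitter_brightness(image, delta, seed=0):
    """Shift every pixel by one random amount in [-delta, delta],
    clamped to the valid 0..255 range."""
    shift = random.Random(seed).randint(-delta, delta)
    return [[min(255, max(0, p + shift)) for p in row] for row in image]

def augment(image):
    """Yield the original image plus simple variants."""
    yield image
    yield flip_horizontal(image)
    yield jitter_brightness(image, 30)

sample = [[0, 128, 255],
          [10, 20, 30]]
variants = list(augment(sample))  # 3 images: original, mirrored, brightness-shifted
```

Even trivial transforms like these multiply the effective dataset size and reduce the model's sensitivity to handedness and lighting.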

Accomplishments that we're proud of

For several of our members, this was the first time developing a full project for submission at a hackathon. Furthermore, the complexity of this project really pushed our teamwork, built our trust in each other, and tested our problem-solving abilities.

What we learned

As the creators of an ASL-teaching program, we would like to think that we learned some ASL. In addition, we all gained insight into machine learning and web development. We came to better understand how our datasets affect our model's accuracy and its probability of overfitting, and how important it is to have diverse data points. In terms of web development, we learned more about the different technologies available for creating websites, from server-based solutions such as Flask, to compiled code such as TypeScript, to plain HTML, CSS, and JavaScript.
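The overfitting lesson above usually comes down to one habit: holding out a validation split so that a gap between training and validation accuracy becomes visible. A minimal sketch of such a split, with hypothetical names and fractions:

```python
import random

def train_val_split(samples, val_fraction=0.2, seed=42):
    """Shuffle and split samples into training and validation sets.
    Accuracy measured on the held-out set exposes overfitting that
    training accuracy alone would hide."""
    rng = random.Random(seed)
    shuffled = samples[:]
    rng.shuffle(shuffled)
    n_val = int(len(shuffled) * val_fraction)
    return shuffled[n_val:], shuffled[:n_val]

data = list(range(100))
train, val = train_val_split(data)  # 80 training samples, 20 validation samples
# A widening gap between train and validation accuracy signals overfitting.
```

With diverse data, the validation set also catches a subtler failure: a model that scores well only because every example shares the same signer, background, or lighting.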

What's next for ASLearn

We hope to continue working on this project and add more features. Ideas currently include: adding additional vocabulary words, more diverse exercises, increased engagement, and visual cues for ease of access.
