Inspiration

One of our team members has a family member who is deaf and has used sign language all their life. Inspired by that journey, they pushed forward an idea to celebrate innovative technologies that, if implemented well, could bridge communication gaps: helping people learn ASL and helping deaf people communicate.

What it does

ASL-Quick-Learn is a website that lets a user sign different ASL letters and get feedback on their signs. The website generates a random letter; the user signs it and takes a picture using a handy one-click button on the website. Immediately after the picture is taken, the user earns a point if the sign was correct, and the website generates a new random letter to sign. At any time, the user can see their current score at the bottom of the screen.

How we built it

We followed a well-thought-out, step-by-step process. We started with a web-scraping script built in Python with the BeautifulSoup library, which gathered an image of the sign for each letter from an online ASL dictionary. Although we currently support only ASL letters, we want to add words in the future, and this script lets us scale quickly.

Next, we pre-processed the dataset using Google's Mediapipe to label 21 distinct locations on the hand (each with x, y, and z values) for each ASL gesture. We then labeled each image with its corresponding letter and the 63 values generated from those landmarks, which together make up the model's data.

With a basic machine learning model built using TensorFlow, Mediapipe, and OpenCV, we were able to predict the letter corresponding to the sign in a given image. Finally, we built the website that ties everything together.
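The landmark-to-feature step can be sketched roughly as follows. This is a minimal illustration: the actual landmark extraction (Mediapipe's hand-tracking solution) is omitted so the sketch stays self-contained, and the function name is our own, not from the project's code.

```python
# Mediapipe's hand tracker returns 21 landmarks per hand, each with
# x, y, z coordinates. Flattening them yields the 63-value feature
# vector used to train the classifier.

def landmarks_to_features(landmarks):
    """Flatten 21 (x, y, z) hand landmarks into a 63-value feature vector."""
    if len(landmarks) != 21:
        raise ValueError("expected 21 hand landmarks, got %d" % len(landmarks))
    features = []
    for x, y, z in landmarks:
        features.extend([x, y, z])
    return features

# Example: a dummy hand with every landmark at the origin.
dummy_hand = [(0.0, 0.0, 0.0)] * 21
vector = landmarks_to_features(dummy_hand)
print(len(vector))  # 63
```

Each training row is then this 63-value vector plus the letter label for the image it came from.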

Challenges we ran into

The biggest issue was the lack of data. With over 20 potential classes, and barely any data to support an already difficult problem, we had a hard time developing a feasible neural-network model whose performance could rival that of a human who knows ASL.

Another issue was that we wanted to build something a user could easily use, but none of us had any front-end knowledge. We spent a significant portion of our time learning and researching how to use HTML, CSS, and JavaScript to build a locally hosted website. This definitely paid off, though: we created a relatively clean-looking website that can stream live video and take pictures from that stream.

Finally, since none of us had front-end experience, we certainly did not have full-stack development experience either, so connecting our website to the machine learning model was quite difficult. Using Python's Flask framework, we were eventually able to complete this task and compare images taken live against our data.
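The glue between the site and the model can be sketched as a single Flask route. This is a simplified illustration: the route path and the `classify_sign` stub are assumptions for the sketch, not the project's actual code.

```python
from flask import Flask, request, jsonify

app = Flask(__name__)

def classify_sign(image_bytes):
    """Placeholder for the real pipeline: Mediapipe landmark
    extraction followed by the TensorFlow classifier."""
    return "A"  # stubbed prediction for illustration

@app.route("/predict", methods=["POST"])
def predict():
    # The front end posts the captured frame as multipart form data.
    image = request.files["image"].read()
    letter = classify_sign(image)
    return jsonify({"letter": letter})
```

The JavaScript on the page captures a frame from the video stream, posts it to this route, and compares the returned letter against the one the site asked the user to sign.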

Accomplishments that we're proud of

We're definitely proud of our product, especially since this was our first hackathon!

What we learned

As mentioned earlier, none of us had any front-end knowledge or experience at all, but we were able to create a working locally hosted website with a built-in video stream and the ability for the user to capture images from it.

What's next for ASL-Quick-Learn

In the future we want to add support for more than just ASL letters. Fortunately, because our pipeline is built around web scraping, this would be very easy: simply adding any signable word to an array in our script would let us gather data on that word.
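A rough sketch of how that scraper scales follows. The sample markup and the word list here are invented for illustration; the real dictionary site's page structure will differ.

```python
from bs4 import BeautifulSoup

# Invented sample markup standing in for one dictionary page.
SAMPLE_PAGE = """
<div class="sign-entry">
  <img src="/signs/hello.png" alt="ASL sign for hello">
</div>
"""

# Extending this list is all it takes to cover new words.
WORDS = ["a", "b", "c", "hello"]

def extract_image_urls(html):
    """Pull the sign-image URLs out of a dictionary page."""
    soup = BeautifulSoup(html, "html.parser")
    return [img["src"] for img in soup.find_all("img")]

print(extract_image_urls(SAMPLE_PAGE))  # ['/signs/hello.png']
```

In the real script, each entry in the word list maps to a page URL that gets fetched before parsing; every new word added to the list flows through the same labeling and training pipeline.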
