Inspiration

We wanted to create a web app that helps people learn American Sign Language (ASL).

What it does

SignLingo starts by giving the user a phrase to sign. It captures the user's attempt through the webcam and decides whether the phrase was signed correctly. If it was, SignLingo moves on to the next phrase; if not, it plays a video of the correct signing of the word.
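The flow above can be sketched as a small loop. The helper functions passed in (`get_user_attempt`, `matches_reference`, `play_reference_video`) are hypothetical stand-ins for the webcam-capture and comparison code, not the actual implementation:

```python
# Minimal sketch of the SignLingo practice loop. The three callables are
# hypothetical placeholders for webcam capture, sign comparison, and
# reference-video playback.
def practice(phrases, get_user_attempt, matches_reference, play_reference_video):
    """Run through a list of phrases, advancing only on a correct sign."""
    for phrase in phrases:
        while True:
            attempt = get_user_attempt(phrase)    # capture webcam input
            if matches_reference(phrase, attempt):
                break                             # correct: go to next phrase
            play_reference_video(phrase)          # incorrect: show correct sign
```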

How we built it

We started by downloading and preprocessing a word-to-ASL-video dataset. We used OpenCV to extract and process video frames and compare the frames of the user's input video to the reference signing of the word. We used MediaPipe to detect hand movements and Tkinter to build the front-end.

Challenges we ran into

We definitely ran into a lot of challenges, from installing compatible packages and incorporating models to building a working front-end to showcase our model.

Accomplishments that we're proud of

We are so proud that we actually managed to build and submit something. We couldn't build everything we had in mind when we started, but we have a working demo that can serve as a first step toward the project's goal. There were times when we thought we wouldn't be able to submit anything at all, but we pushed through, and we're proud that we didn't give up and now have a working template.

What we learned

While working on our project, we learned a lot, ranging from ASL grammar to how to adapt different models to our needs.

What's next for SignLingo

Right now, SignLingo is far from what we imagined, so the next step is to take it to the level we first envisioned. This will include extending our model to recognize more phrases with greater accuracy and improving the design.

Built With

OpenCV · MediaPipe · Tkinter + 3 more