Inspiration

When we want to learn a new language, we can just hop onto Duolingo and start a course. What we often overlook is that a key part of learning a language is practicing it through actual use. People learning sign language rarely get that chance, because opportunities to interact in sign language are far less common, which creates a steep and challenging learning curve. Existing digital platforms for learning sign language lack interaction and sophistication, offering little more than simple multiple-choice quizzes. With this approach, students only learn to recognize sign language, not to communicate with it. We aim to solve this by using machine learning and computer vision to build an interactive platform for learning sign language, breaking down barriers in communication with the deaf and hard-of-hearing community.

What it does

Our project is a web app that uses machine learning and computer vision to automatically detect sign language gestures, giving real-time scoring and feedback. It offers interactive quizzes in which students answer a question by making the appropriate sign language gesture. We have multiple difficulty levels as well as multiple types of interactive questions, and students can get hints or consult our student reference sheet. Lastly, a scoring and XP system for each level gamifies the process of learning sign language.

How we built it

  • Google Cloud serverless functions (Python) for the back-end endpoints.
  • Google Cloud SQL + MySQL for storing user data.
  • Google Cloud App Engine for app hosting.
  • Keras + TensorFlow + OpenCV for the computer vision models that recognize sign language gestures (a CNN trained on a Kaggle dataset); see the sketch after this list.
  • ReactJS for the front-end.
  • Base64 encoding for sending images over the REST API.
  • Bcrypt for password hashing.
  • Figma for UI design.
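
Roughly, these pieces meet in the prediction endpoint: the front-end posts a Base64-encoded webcam frame, the Cloud Function decodes it with OpenCV, and the Keras model classifies the gesture. The sketch below shows the idea; the function name, model filename, input size, and label ordering are illustrative assumptions rather than our exact code.

```python
# Sketch of an HTTP Cloud Function (Python, Flask-style request object) that
# decodes a Base64-encoded frame and classifies it with a Keras CNN.
import base64
import json

import cv2
import numpy as np
import tensorflow as tf

# Placeholder label order: 26 letters plus "space", "del", "nothing" (29 classes).
LABELS = [chr(c) for c in range(ord("A"), ord("Z") + 1)] + ["space", "del", "nothing"]

model = tf.keras.models.load_model("gesture_cnn.h5")  # assumed model filename


def predict_gesture(request):
    """Expects JSON like {"image": "<base64-encoded JPEG>"}."""
    payload = request.get_json(silent=True) or {}
    # Tolerate a data-URL prefix ("data:image/jpeg;base64,...") from the browser.
    raw = base64.b64decode(payload["image"].split(",")[-1])

    # Decode the JPEG bytes into an image, then match the model's input shape.
    frame = cv2.imdecode(np.frombuffer(raw, dtype=np.uint8), cv2.IMREAD_COLOR)
    frame = cv2.resize(frame, (64, 64)).astype("float32") / 255.0  # assumed 64x64 input
    probs = model.predict(frame[np.newaxis, ...])[0]

    best = int(np.argmax(probs))
    return json.dumps({"label": LABELS[best], "confidence": float(probs[best])})
```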

Challenges we ran into

  • Integrating all of these components in a short period of time.
  • Getting the TensorFlow models to work with Google Cloud serverless functions (see the sketch after this list).
  • Figuring out Base64 encoding.
  • Giving real-time feedback and scoring.
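
One pattern that helps with the TensorFlow-on-serverless problem is to cache the loaded model in a module-level variable, so a warm function instance reuses it instead of reloading the weights on every request. The snippet below is a sketch under that assumption; the model path and handler name are placeholders.

```python
# Sketch: reuse the Keras model across warm invocations of a Cloud Function.
import tensorflow as tf

_model = None  # cached for as long as this function instance stays warm


def get_model():
    global _model
    if _model is None:
        # Loaded once per instance; later requests skip the expensive load.
        _model = tf.keras.models.load_model("gesture_cnn.h5")  # assumed path
    return _model


def handler(request):
    model = get_model()
    # ...decode the incoming frame and call model.predict() as shown earlier...
    return "ok"
```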

Accomplishments we are proud of

  • Creating a machine learning model with over 95% accuracy on the test dataset, which we consider very high given that there are 29 classes of gestures (see the sketch after this list).
  • Deploying everything on Google Cloud's serverless architecture.
  • Creating a clean and responsive web app.
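
For a sense of what a 29-class gesture classifier involves, the sketch below shows a small Keras CNN with a 29-way softmax head. The layer sizes and input shape are illustrative assumptions, not our exact architecture.

```python
# Illustrative Keras CNN for 29 gesture classes (layer sizes are assumptions).
import tensorflow as tf
from tensorflow.keras import layers


def build_gesture_cnn(input_shape=(64, 64, 3), num_classes=29):
    model = tf.keras.Sequential([
        layers.Conv2D(32, 3, activation="relu", input_shape=input_shape),
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="categorical_crossentropy",
                  metrics=["accuracy"])
    return model
```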

What’s next

As of now, signlingo only supports ASL (American Sign Language). In the future we would like to add support for other sign languages, such as BSL (British Sign Language) and CSL (Chinese Sign Language). Next, we would like to expand our model to cover a larger vocabulary. Lastly, we would like to build more sophisticated scoring and point systems to make learning sign language even more fun and enjoyable for people of all ages.
