Inspiration

Over 70 million people worldwide live with hearing-loss-related disorders. There is an acute disparity between advances in general-purpose communication technologies, such as speech-to-text, and those designed for people with disabilities. No automated, scalable platform exists for deaf and hard-of-hearing individuals to translate their sign language for everyday use. This changes with Signly.

What it does

Signly uses computer vision and machine learning to recognize ASL (American Sign Language) hand gestures, covering common phrases as well as individual letters of the alphabet. Signly has broad potential in many areas, one of them being education: powering an automated platform for learning and practicing sign language (much as Duolingo does for spoken languages).

How we built it

  • Python + OpenCV for camera capture and computer vision
  • MediaPipe Hands and Pose models for landmark annotations
  • A scikit-learn random forest classifier on top of the MediaPipe landmarks for gesture recognition (a minimal end-to-end sketch follows this list)
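
Below is a minimal sketch of what such a recognition loop can look like, assuming a random forest classifier has already been trained on flattened MediaPipe hand landmarks and saved as "signly_rf.joblib"; the file name and label handling are illustrative, not the project's exact code.

    import cv2
    import joblib
    import mediapipe as mp

    clf = joblib.load("signly_rf.joblib")  # hypothetical pre-trained model
    mp_hands = mp.solutions.hands

    cap = cv2.VideoCapture(0)
    with mp_hands.Hands(max_num_hands=1, min_detection_confidence=0.5) as hands:
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            # MediaPipe expects RGB input; OpenCV captures frames in BGR.
            results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if results.multi_hand_landmarks:
                lm = results.multi_hand_landmarks[0].landmark
                # 21 landmarks x (x, y) -> 42-dimensional feature vector.
                features = [c for p in lm for c in (p.x, p.y)]
                label = clf.predict([features])[0]
                cv2.putText(frame, str(label), (10, 40),
                            cv2.FONT_HERSHEY_SIMPLEX, 1.2, (0, 255, 0), 2)
            cv2.imshow("Signly", frame)
            if cv2.waitKey(1) & 0xFF == ord("q"):
                break
    cap.release()
    cv2.destroyAllWindows()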

Challenges we ran into

  • Getting decent accuracy on gesture recognition
  • Data pre-processing: resizing and repositioning the MediaPipe landmark annotations before feeding them into a custom model (see the normalization sketch after this list)
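
The sketch below illustrates the kind of pre-processing "resizing and repositioning" refers to: translating each hand's landmarks so the wrist sits at the origin and rescaling them to a fixed range, making the features invariant to where the hand is and how far it is from the camera. The function name and exact scheme are assumptions for illustration, not the project's actual code.

    import numpy as np

    def normalize_landmarks(landmarks):
        """landmarks: MediaPipe points with .x and .y attributes."""
        pts = np.array([[p.x, p.y] for p in landmarks], dtype=np.float32)
        pts -= pts[0]              # reposition: wrist (landmark 0) at the origin
        scale = np.abs(pts).max()  # resize: largest offset scaled to 1.0
        if scale > 0:
            pts /= scale
        return pts.flatten()       # 42-dim feature vector for the classifier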

Accomplishments that we're proud of

  • Getting the recognition to work seamlessly
  • Getting the code working for both individual letters and full phrases
  • Getting everything working in < 36 hrs!

What I learned

  • How to use different machine learning models such as random forests, SVM classifiers, KNN, and logistic regression (a quick comparison sketch follows this list)
  • How to implement a real-time pipeline with OpenCV and MediaPipe models
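
As an illustration of trying several classifiers, here is a quick cross-validation comparison sketch; the synthetic dataset stands in for the real landmark features and labels, which are not shown here.

    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.svm import SVC
    from sklearn.neighbors import KNeighborsClassifier
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    # Stand-in for the real dataset: 42 landmark features per sample, 26 letters.
    X, y = make_classification(n_samples=1000, n_features=42, n_informative=20,
                               n_classes=26, random_state=0)

    models = {
        "RandomForest": RandomForestClassifier(n_estimators=200, random_state=0),
        "SVM": SVC(kernel="rbf"),
        "KNN": KNeighborsClassifier(n_neighbors=5),
        "LogisticRegression": LogisticRegression(max_iter=1000),
    }

    for name, model in models.items():
        scores = cross_val_score(model, X, y, cv=5)
        print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")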

What's next for Signly

I think the most compelling product built on this technology would be an automated platform for learning sign language through interactive exercises and quizzes. That is the next step for Signly.
