Inspiration

From the start, we wanted our hackathon project to make a visible, meaningful impact by addressing a real-world need. As we explored options, we considered productivity tools, information summarizers, and projects from past hackathons. Ultimately, we chose to focus on enhancing inclusivity for the deaf community by creating a platform that teaches sign language. This solution not only empowers people to communicate more effectively with deaf individuals but also promotes inclusivity and awareness.

What it does

Our platform provides users with accessible resources to learn sign language. Users can explore signs for letters, numbers, and a few essential words through topic-specific quizzes. These quizzes enable users to reinforce what they've learned by associating each sign with its meaning, making sign language more approachable and engaging.

How we built it

The core of our project is a machine learning model trained on visual data captured with OpenCV (the cv2 Python library). For each letter, number, and word sign, we collected 100 image samples, building a robust dataset for training. We then used this data to train a classifier that recognizes signs users make in real time through their video feed. To bring it all together, we built a Flask-based website where users can test their skills in a quiz format, making the learning experience interactive and immersive.
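The writeup doesn't specify which classifier we trained, so as a rough sketch of the idea, here is a minimal nearest-neighbor classifier over flattened image samples. The synthetic arrays stand in for the 100 captured frames per sign, and the `train`/`predict` names are illustrative, not our actual code:

```python
import numpy as np

def train(samples_per_sign):
    """Stack labeled, flattened image samples into a training set.

    samples_per_sign: dict mapping a sign label (e.g. "A", "5") to an
    array of shape (n_samples, n_pixels).
    """
    X = np.vstack(list(samples_per_sign.values()))
    y = np.concatenate(
        [[label] * len(imgs) for label, imgs in samples_per_sign.items()]
    )
    return X, y

def predict(X, y, frame):
    """Classify one flattened frame by its nearest training sample."""
    distances = np.linalg.norm(X - frame, axis=1)
    return y[np.argmin(distances)]

# Synthetic stand-in for the captured dataset: 100 samples per sign.
rng = np.random.default_rng(0)
dataset = {
    "A": rng.normal(0.0, 0.1, size=(100, 64)),
    "B": rng.normal(1.0, 0.1, size=(100, 64)),
}
X, y = train(dataset)
print(predict(X, y, np.full(64, 0.95)))  # lands in the "B" cluster
```

In the real pipeline the flattened vectors come from frames captured with cv2 rather than random arrays, but the train-then-nearest-match shape of the problem is the same.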

Challenges we ran into

One of our biggest challenges was developing the machine learning model, as we had to learn key concepts on the fly. A significant hurdle emerged during the quiz implementation when we realized our video pipeline was wired backward: the server was streaming the model's output to the user instead of receiving the user's webcam frames as input. Fixing this required a major code overhaul, but it ultimately made our platform more intuitive.
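The corrected flow, with the browser posting frames to the server rather than the reverse, can be sketched as a Flask endpoint that accepts a frame and replies with the model's prediction. The `/predict` route and `classify` helper are hypothetical names for illustration, not our actual code:

```python
from flask import Flask, jsonify, request

app = Flask(__name__)

def classify(frame_bytes):
    # Placeholder for the trained model: the real app would decode
    # the posted image bytes and run the sign classifier on them.
    return "A"

@app.route("/predict", methods=["POST"])
def predict():
    # The client sends a single webcam frame; the server answers with
    # the recognized sign, which the quiz page then checks.
    frame = request.get_data()
    return jsonify({"sign": classify(frame)})
```

The key point is the direction of data flow: user input travels from the browser to the server, and only the classification result comes back.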

Accomplishments that we're proud of

We are proud of creating a working platform that brings sign language learning to life through interactive quizzes powered by real-time sign recognition. We're also excited about the progress we made with our machine learning model and how much it enhances the user experience.

What we learned

Working on this project deepened our understanding of machine learning and data processing. We learned firsthand how to capture and refine data, train a model, and evaluate its performance. This experience was invaluable for experienced members and newcomers alike, as we all gained practical insights into building real-world applications.

What's next for the Sign Language Learning Platform

We envision expanding our platform in several ways. Adding a points system and user rankings, similar to Duolingo, could boost engagement and motivation. To make our model more inclusive, we plan to capture a wider variety of hand shapes and sizes. Additionally, we aim to extend our model's capabilities to recognize dynamic, motion-based signs, overcoming its current limitation of only recognizing static signs. These future developments will make our platform even more accessible and comprehensive.
