Inspiration
Inspired by the need for inclusive communication, SignSync was born from a vision to seamlessly integrate the deaf and hard-of-hearing community into everyday conversations, letting them express their emotions and tell stories without having to type.
What it does
SignSync uses a machine learning model for real-time, accurate sign language interpretation, ensuring effective communication across various settings.
How we built it
We built SignSync around a Random Forest classifier, focusing on real-time sign language recognition and translation. The model extracts hand landmarks from each frame and classifies them using a training set of more than 28,000 pictures, all of which we captured ourselves.
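A minimal sketch of the classification step, assuming each hand is reduced to 21 MediaPipe landmarks (x, y pairs, giving 42 features per frame). The synthetic data here is a stand-in for the real 28,000-image landmark dataset, and the two "signs" are hypothetical labels:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# 21 MediaPipe hand landmarks with (x, y) each -> 42 features per sample.
# Synthetic stand-in for the real dataset: two fake "signs" drawn from
# slightly shifted distributions so they are separable.
n_per_class = 200
X = np.vstack([
    rng.normal(0.4, 0.05, size=(n_per_class, 42)),  # hypothetical sign "A"
    rng.normal(0.6, 0.05, size=(n_per_class, 42)),  # hypothetical sign "B"
])
y = np.array(["A"] * n_per_class + ["B"] * n_per_class)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y)

# Random Forest over landmark features, as in SignSync's pipeline.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)
accuracy = clf.score(X_test, y_test)
```

Classifying landmark coordinates instead of raw pixels keeps the feature vector tiny (42 numbers rather than a full image), which is what makes real-time inference with a Random Forest practical.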
Challenges we ran into
Perfecting the model required collecting tens of thousands of pictures, which was slow at first, so we wrote a script that captures images automatically to speed up data collection.
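A sketch of the batch-saving idea behind such a capture script. The function name and file layout are assumptions, and synthetic arrays stand in for webcam frames (the real script would read frames from OpenCV's `cv2.VideoCapture`):

```python
import os
import tempfile
import numpy as np

def collect_samples(label, frames, out_dir):
    """Save one file per frame under out_dir/<label>/, mimicking a
    rapid-capture loop that labels images by the sign being performed.
    (Hypothetical helper; the real script captured from a webcam.)"""
    class_dir = os.path.join(out_dir, label)
    os.makedirs(class_dir, exist_ok=True)
    paths = []
    for i, frame in enumerate(frames):
        path = os.path.join(class_dir, f"{label}_{i:05d}.npy")
        np.save(path, frame)
        paths.append(path)
    return paths

# Synthetic 100x100 grayscale "frames" standing in for camera captures.
frames = [np.zeros((100, 100), dtype=np.uint8) for _ in range(5)]
out = tempfile.mkdtemp()
saved = collect_samples("hello", frames, out)
```

For scale: 28,000 images in 30 minutes works out to roughly 15 frames per second sustained, which is why an automated loop rather than manual photography was essential.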
Accomplishments that we're proud of
We successfully created an intuitive, real-time interpretation tool that bridges the communication gap for the deaf and hard-of-hearing community. Thanks to our automated capture script, collecting the 28,000 training images took only 30 minutes.
What we learned
We gained insights into AI's potential in language interpretation and the importance of accessibility in technology.
What's next for SignSync
We aim to train the model on more American Sign Language (ASL) vocabulary. ASL has about 140,000 words, and our fast data-collection pipeline makes expanding coverage feasible. Additionally, we'd like to turn SignSync into a mobile app to make it more accessible to users.
Built With
- mediapipe
- opencv
- python
- scikit-learn
