Inspiration

We were inspired by the need for better communication tools for non-verbal individuals. Recognizing the barriers they face in daily interactions, we aimed to create an inclusive solution that empowers everyone to express themselves and connect with others.

What it does

SignPal is an application that translates sign language gestures into real-time English captions, helping non-verbal individuals communicate with friends, family, and anyone who does not know sign language.

How we built it

We developed SignPal using deep learning. We used the MediaPipe framework to extract hand landmarks from live video, trained an LSTM-based gesture classifier with TensorFlow, and used OpenCV for video capture and real-time frame processing.
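The core of this pipeline is turning a stream of per-frame landmark vectors into fixed-length sequences an LSTM can classify, then smoothing the predictions so captions don't flicker. Below is a minimal sketch of that buffering logic, assuming a 30-frame window and 126 features per frame (two hands x 21 MediaPipe landmarks x three coordinates); the class name, window sizes, and vote-based smoothing are illustrative choices, not the exact SignPal implementation.

```python
from collections import deque, Counter
import numpy as np

SEQ_LEN = 30      # frames per gesture sequence (assumed window size)
N_FEATURES = 126  # 2 hands x 21 MediaPipe landmarks x (x, y, z)

class GestureBuffer:
    """Collects per-frame landmark vectors into fixed-length sequences
    suitable as LSTM input, and smooths predicted labels by majority vote."""

    def __init__(self, seq_len=SEQ_LEN, vote_window=5):
        self.frames = deque(maxlen=seq_len)   # rolling window of frames
        self.votes = deque(maxlen=vote_window)  # recent predicted labels

    def add_frame(self, landmarks):
        """landmarks: 1-D array of length N_FEATURES (zeros when no hand is detected)."""
        self.frames.append(np.asarray(landmarks, dtype=np.float32))

    def sequence(self):
        """Return a (seq_len, N_FEATURES) array once the window is full, else None."""
        if len(self.frames) < self.frames.maxlen:
            return None
        return np.stack(self.frames)

    def smooth(self, label):
        """Majority-vote over recent predictions to stabilize the caption."""
        self.votes.append(label)
        return Counter(self.votes).most_common(1)[0][0]
```

On each camera frame, the landmark vector from MediaPipe is pushed into the buffer; once `sequence()` returns a full window, it is fed to the trained model, and `smooth()` keeps a single spurious misclassification from changing the on-screen caption.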

Challenges we ran into

Throughout development, we faced several challenges: finding a sign language word dataset, long model training times, ensuring real-time performance, and creating a user-friendly interface.

Accomplishments that we're proud of

We successfully created a prototype that translates 10-15 common sign language gestures into text in real time. Our team collaborated effectively, overcame technical hurdles, and received positive feedback from initial users during testing.

What we learned

We gained valuable insights into deep learning and computer vision. Additionally, we learned about the significance of accessibility in technology and how it can positively impact individuals' lives.

What's next for SignPal: Real-Time Sign Language Translation

Moving forward, we plan to expand the model’s vocabulary by training it on larger datasets, incorporate additional features like audio feedback, and enhance the user interface for a more seamless experience. We aim to collaborate with organizations supporting non-verbal individuals to further refine our application and increase its accessibility.

Built With

mediapipe, opencv, tensorflow