SignBridge — Project Story
Inspiration
SignBridge was inspired by the need to reduce communication barriers between hearing and deaf communities. I wanted to build an accessibility-focused AI project that combines sign language recognition with speech-to-text to support real-world conversations.
What I Learned
I learned how to use transfer learning with TensorFlow, integrate speech recognition APIs, and design a real-time interface. Training helped me understand how a classifier improves by minimizing the cross-entropy loss:
$$ L = -\sum_i y_i \log(\hat{y}_i) $$
I also improved my problem-solving and debugging skills and got better at explaining technical ideas clearly.
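To make the loss concrete, here is a small NumPy sketch of the categorical cross-entropy above; the toy labels and predictions are purely illustrative and not taken from the project's data.

```python
import numpy as np

def cross_entropy(y_true, y_pred, eps=1e-12):
    """Categorical cross-entropy averaged over a batch of one-hot labels."""
    y_pred = np.clip(y_pred, eps, 1.0)          # avoid log(0)
    return -np.mean(np.sum(y_true * np.log(y_pred), axis=1))

# Toy example: a confident, correct prediction yields a small loss.
y_true = np.array([[0.0, 1.0, 0.0]])
y_pred = np.array([[0.05, 0.90, 0.05]])
print(cross_entropy(y_true, y_pred))  # ~0.105
```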
How I Built It
I collected sign images using OpenCV, trained a MobileNetV2-based model, and added custom layers for classification. The system uses speech recognition to convert voice into text, and a frontend dashboard displays live predictions and transcripts.
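The transfer-learning setup might look like the following minimal Keras sketch. The input size, number of classes, and the specific custom layers are assumptions for illustration, not the exact configuration used in SignBridge.

```python
import tensorflow as tf

NUM_CLASSES = 26  # hypothetical: one class per static letter sign

# Load MobileNetV2 pretrained on ImageNet, without its classification head.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet")
base.trainable = False  # freeze the backbone for initial training

# Add custom classification layers on top of the frozen backbone.
model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(NUM_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="categorical_crossentropy",
              metrics=["accuracy"])
```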
Challenges
The biggest challenges were small datasets, unstable API connections, and maintaining real-time performance. I used data augmentation, retry logic with exponential backoff, and model freezing to improve stability.
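As a sketch of the backoff strategy, assuming the flaky speech API call is wrapped in a function passed in as `fn` (the helper name and retry parameters here are illustrative):

```python
import random
import time

def call_with_backoff(fn, max_retries=5, base_delay=0.5):
    """Retry a flaky call with exponential backoff plus a little jitter."""
    for attempt in range(max_retries):
        try:
            return fn()
        except ConnectionError:
            # Wait 0.5s, 1s, 2s, ... plus random jitter before retrying.
            delay = base_delay * (2 ** attempt) + random.uniform(0, 0.1)
            time.sleep(delay)
    raise RuntimeError("API call failed after all retries")
```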
Reflection
This project showed me that meaningful AI is not only about accuracy but also about usability and real-world impact. My goals are to expand the dataset, improve accuracy, and develop SignBridge into a scalable accessibility tool.
