What it does
Signbridge takes recorded videos of American Sign Language (ASL), processes them with machine learning models, and translates them into written English. The output is a plain-text transcript that helps bridge communication between the Deaf and hearing communities.
How we built it
We combined models from the Hugging Face Hub, using T5 for sequence-to-sequence text generation and MovieNet for video understanding. Pairing the two let us turn raw video data into meaningful English text, as sketched below.
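The snippet below is a minimal sketch of that pipeline, not our exact code: the `t5-small` checkpoint, the OpenCV frame sampling, the "translate ASL gloss to English" prompt prefix, and the helper names are illustrative assumptions standing in for the fine-tuned models and glue code we actually used.

```python
# Illustrative pipeline sketch: sample frames from a recorded ASL video,
# then generate English text with a T5 model from the Hugging Face Hub.
import cv2
from transformers import T5Tokenizer, T5ForConditionalGeneration

def load_frames(video_path, every_nth=5):
    """Read a recorded ASL video and keep every n-th frame for the video model."""
    cap = cv2.VideoCapture(video_path)
    frames, i = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if i % every_nth == 0:
            frames.append(frame)
        i += 1
    cap.release()
    return frames

def translate(sign_description: str) -> str:
    """Turn an intermediate text description of the signs into English with T5."""
    tokenizer = T5Tokenizer.from_pretrained("t5-small")  # placeholder checkpoint
    model = T5ForConditionalGeneration.from_pretrained("t5-small")
    inputs = tokenizer("translate ASL gloss to English: " + sign_description,
                       return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=64)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)
```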
Challenges we ran into
We faced model accuracy limitations and testing hurdles, especially when working with diverse signing styles and varying video conditions. These challenges pushed us to refine our preprocessing and testing workflows (see the sketch below).
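For example, one way to tame varying video conditions is to normalize every frame before inference. This is an illustrative sketch; the 224x224 target size and [0, 1] scaling are assumptions, not necessarily the settings we shipped.

```python
# Illustrative frame normalization: resize, center-crop, and scale a BGR
# frame from OpenCV into a model-friendly RGB array.
import cv2
import numpy as np

def normalize_frame(frame: np.ndarray, size: int = 224) -> np.ndarray:
    h, w = frame.shape[:2]
    scale = size / min(h, w)                      # shrink/grow so the short side == size
    frame = cv2.resize(frame, (int(round(w * scale)), int(round(h * scale))))
    h, w = frame.shape[:2]
    top, left = (h - size) // 2, (w - size) // 2  # center crop
    frame = frame[top:top + size, left:left + size]
    frame = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
    return frame.astype(np.float32) / 255.0       # scale pixel values to [0, 1]
```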
Accomplishments that we’re proud of
We successfully built a functioning prototype capable of translating ASL videos into English. It represents an important first step toward breaking language barriers with AI-powered tools.
What we learned
Along the way, we deepened our skills in Git and GitHub collaboration, streamlined environment management with Pipenv, and gained valuable insights into working with multimodal ML models.
What’s next for Signbridge – ASL Translator
We’re excited to expand functionality with file sharing, improve translation accuracy, and explore reverse translation (English to ASL) to make Signbridge a truly two-way communication tool.
Built With
- css
- flask
- html
- javascript
- movienet
- python
- t5
- tensorflow
- torch