Inspiration
I first felt the problem in a very personal way at church. I noticed Deaf and hard-of-hearing congregants were seated in a separate area so they could maintain a clear line of sight to the interpreter. It was not about preference; it was about access. That moment made me ask a simple question: why should inclusion require separation? If churches already use screens for lyrics and announcements, then accessibility should be able to live on those same screens—so everyone can sit together and participate equally. That experience became the motivation behind Signa: a practical, real-time way to translate spoken words and displayed text into sign language so Deaf congregants can follow the service from anywhere in the sanctuary, with dignity and full participation.
What it does
Signa is an accessibility solution that converts spoken words or prepared text into sign-language output that can be shown on church screens, projectors, or livestream overlays.
In simple terms:
- People speak (or text is provided)
- Signa converts it into structured text
- The system generates sign-language output (sign visuals/animated signer panel)
- The signing is displayed on the same screens the whole church uses
The goal is to ensure Deaf and hard-of-hearing congregants can sit anywhere and still fully understand and participate.
How we built it
Signa is designed as a lightweight pipeline that can run during live events:
- Input: The system accepts either live audio (speech) or prepared text.
- Speech-to-Text (if audio): Audio is transcribed into text in real time.
- Language Processing: The text is cleaned and segmented into sign-friendly phrases (because sign language grammar differs from spoken language).
- Sign Generation: The processed text is mapped into sign outputs using a curated sign vocabulary and a model that supports word-level translation (with a roadmap toward sentence-level translation).
- Display Output: The sign output is rendered as an on-screen signing view (e.g., avatar/video panel) that can be embedded onto existing church screens, livestream overlays, or projectors.
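The pipeline above can be sketched in a few lines of Python. Everything here is illustrative: the function names, the tiny sample vocabulary, and the fixed-length phrase segmentation are assumptions for demonstration, not the actual Signa implementation (which would load curated "church packs" and a trained word-level translation model).

```python
# Illustrative sketch of the Signa text-to-sign pipeline.
# SIGN_VOCAB is a stand-in for a curated sign vocabulary; unknown
# words fall back to fingerspelling, as word-level systems often do.

SIGN_VOCAB = {
    "welcome": "sign_welcome.mp4",
    "thank": "sign_thank.mp4",
    "you": "sign_you.mp4",
    "pray": "sign_pray.mp4",
}

def normalize(text: str) -> list[str]:
    """Clean the transcript: lowercase words, strip punctuation."""
    words = [w.strip(".,!?;:").lower() for w in text.split()]
    return [w for w in words if w]

def segment_phrases(words: list[str], max_len: int = 3) -> list[list[str]]:
    """Chunk words into short, sign-friendly phrases (naive fixed-length
    segmentation; a real system would use linguistic boundaries)."""
    return [words[i:i + max_len] for i in range(0, len(words), max_len)]

def map_to_signs(phrases: list[list[str]]) -> list[str]:
    """Word-level mapping: look up each word's sign clip,
    fingerspelling anything outside the vocabulary."""
    out = []
    for phrase in phrases:
        for word in phrase:
            out.append(SIGN_VOCAB.get(word, f"fingerspell:{word}"))
    return out

def pipeline(text: str) -> list[str]:
    """Text in, ordered list of sign clips out, ready for the display layer."""
    return map_to_signs(segment_phrases(normalize(text)))
```

For example, `pipeline("Welcome! Thank you.")` resolves each word to its clip, and a name absent from the vocabulary would come back as a `fingerspell:` entry for the renderer to spell out.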
This approach makes Signa usable well beyond churches: in classrooms, hospitals, government offices, customer-service desks, and media.
Challenges we ran into
- Latency and smoothness: Real-time translation must feel natural. We optimized the pipeline by streaming small chunks of text and caching repeated phrases (e.g., common church terms).
- Sign language structure: Direct word-for-word translation can be inaccurate. We introduced phrase segmentation and a roadmap for sentence-level signing to improve meaning preservation.
- Coverage of vocabulary: Churches use unique terms (names, places, biblical references). We designed the system to allow easy vocabulary expansion and custom “church packs.”
- Trust and usability: Accessibility tools must work consistently. We focused on a simple interface and clear outputs that are easy to validate in a live setting.
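The latency bullet above (streaming small chunks and caching repeated phrases) can be sketched as follows. This is a hedged illustration: `render_signs` is a placeholder for the expensive sign-generation step, and the chunk keying is an assumption, not Signa's actual caching scheme.

```python
from functools import lru_cache

@lru_cache(maxsize=1024)
def render_signs(phrase: str) -> str:
    """Stand-in for the expensive sign-generation step. Because it is
    memoized, repeated phrases (common church terms like 'let us pray')
    are rendered once and served from cache afterwards."""
    return f"clip:{phrase}"

def stream_chunks(transcript_stream):
    """Process the live transcript chunk by chunk, so signing can start
    before a sentence finishes instead of waiting for the full text."""
    for chunk in transcript_stream:
        yield render_signs(chunk.strip().lower())
```

On a stream like `["Let us pray", "Amen", "let us pray"]`, the third chunk is a cache hit, which is where the latency savings come from in a live service.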
Accomplishments that we're proud of
- Built a clear MVP concept that fits into existing church infrastructure (screens, projectors, livestreams).
- Designed an end-to-end flow from audio/text → processed language → sign output → display.
- Identified a strong real-world use case with immediate impact: inclusion without separating Deaf congregants.
- Defined a scalable approach that can extend beyond church to classrooms, clinics, and public services.
- Created a roadmap for vocabulary expansion and improved accuracy over time through user testing and iteration.
What we learned
Building Signa taught us that accessibility is not a “nice-to-have”; it is a core design requirement.
Key lessons:
- Inclusion is both technical and social: the real success metric is not only accuracy, but whether people feel they belong.
- Word-for-word translation is not enough: sign languages have different structure, so we must prioritize meaning through phrase segmentation and sentence-level improvements.
- Latency matters: in live settings, delays break understanding—so streaming small chunks and optimizing the pipeline is essential.
- Real-world adoption requires low friction: churches will only use solutions that integrate smoothly into existing screens and workflows.
What's next for Signa
In the next phase, we will:
- Expand sign vocabulary and improve sentence-level translation quality.
- Test the MVP with Deaf users and interpreters for feedback on accuracy and clarity.
- Integrate with livestream workflows so online viewers also benefit.
- Build partnerships with churches and Deaf organizations to co-design the next iteration.
Signa is not just a project; it is a commitment to inclusion—so no one has to be separated to belong.
Built With
- nextjs
- python
- tensorflow