Inspiration
430 million people globally have hearing loss, yet an estimated 98% of the internet is inaccessible to them. Our first thought was, "Just add captions," but text isn't enough. For many Deaf people, English is a difficult second language, and captions carry neither the tone nor the distinct grammar of the message. We realized we needed a different approach, and an entirely new infrastructure.
What it does (Our Proposed Solution)
Our plan is to build a system where spoken audio is converted into a real-time 3D avatar that signs. Crucially, it changes the English grammar to match sign language syntax and uses facial expressions to convey tone and emotion. This approach provides essential privacy for users and promotes autonomy and acceptance within the workforce and existing education systems.
How we plan to build it
We've focused intensely on the architecture and the blueprint. We designed a three-part system: first, an NLP (Natural Language Processing) layer listens to the audio to understand the meaning and tone. Second, a logic engine converts the sentence structure from English syntax to sign language syntax. Finally, a rendering engine renders the 3D avatar directly on the user's device. This method sends lightweight code instructions, not heavy video files, so it should work even on slow mobile networks in developing countries.
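The three-part flow above can be sketched in a few lines. Everything here is an illustrative assumption, not the project's actual implementation: the toy lexicon, the reordering rule (time markers first, then topic, then pronoun and verb, dropping function words), and the JSON payload format all stand in for the real NLP layer, logic engine, and renderer protocol.

```python
import json
from dataclasses import dataclass

@dataclass
class Token:
    word: str
    tag: str  # simplified part-of-speech tag

def analyze(text: str) -> list[Token]:
    """Stand-in for the NLP layer: tag each word using a toy lexicon."""
    lexicon = {"tomorrow": "TIME", "store": "NOUN", "i": "PRON",
               "going": "VERB", "am": "AUX", "to": "PREP", "the": "DET"}
    return [Token(w, lexicon.get(w.lower(), "OTHER")) for w in text.split()]

def to_gloss(tokens: list[Token]) -> list[str]:
    """Logic engine: reorder English into a sign-language-like gloss.
    Toy rule: time markers first, then topic nouns, then pronoun + verb;
    function words (articles, auxiliaries, prepositions) are dropped."""
    keep = [t for t in tokens if t.tag not in ("DET", "AUX", "PREP")]
    order = {"TIME": 0, "NOUN": 1, "PRON": 2, "VERB": 3, "OTHER": 4}
    ranked = sorted(keep, key=lambda t: order[t.tag])
    # collapse an inflected verb to its base gloss (toy example only)
    return [("GO" if t.word.lower() == "going" else t.word.upper())
            for t in ranked]

def to_instructions(gloss: list[str], emotion: str) -> str:
    """Emit a lightweight JSON payload for the on-device renderer,
    instead of streaming heavy video."""
    return json.dumps({"signs": gloss, "face": emotion})

tokens = analyze("I am going to the store tomorrow")
print(to_instructions(to_gloss(tokens), emotion="neutral"))
# → {"signs": ["TOMORROW", "STORE", "I", "GO"], "face": "neutral"}
```

The payload is a few dozen bytes per sentence, which is what makes the "instructions, not video" design viable on slow mobile networks.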
Challenges we ran into (During Research)
The biggest challenge during the research phase was processing speed: specifically, the delay involved in converting spoken audio into sign language output. We needed near-instantaneous translation without massive computational power slowing us down. Generating every sign from scratch in real time would be too slow, so we plan to use pre-trained models and motion-captured data. Instead of generating signs on the fly, we will build a large library of essential, high-accuracy academic terms captured from human signers. The app sends instructions that pull these pre-existing, lightweight visual assets locally, which keeps responses fast enough for real-time conversation.
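The library-lookup idea can be sketched as a plain dictionary of locally stored animation clips. The clip paths and the per-letter fingerspelling fallback below are assumptions for illustration; the real library would hold motion-captured animation assets.

```python
# Illustrative sketch: resolving glosses against a local, pre-captured
# sign library. Nothing is generated at runtime, so each lookup is a
# sub-millisecond dictionary hit rather than an expensive synthesis step.

SIGN_LIBRARY = {
    "PHOTOSYNTHESIS": "clips/photosynthesis.anim",
    "EQUATION": "clips/equation.anim",
    "TEACHER": "clips/teacher.anim",
}

def resolve_sign(gloss: str) -> list[str]:
    """Return local animation assets for a gloss.
    Known terms map to one pre-captured clip; unknown terms fall back
    to per-letter fingerspelling clips (a common sign-language convention,
    assumed here for the sketch)."""
    if gloss in SIGN_LIBRARY:
        return [SIGN_LIBRARY[gloss]]
    return [f"clips/letters/{ch.lower()}.anim" for ch in gloss if ch.isalpha()]

plan = [resolve_sign(g) for g in ["TEACHER", "EQUATION", "DNA"]]
print(plan)
# → [['clips/teacher.anim'], ['clips/equation.anim'],
#    ['clips/letters/d.anim', 'clips/letters/n.anim', 'clips/letters/a.anim']]
```

Because the assets ship with the app, the only per-sentence cost is the lookup itself, which is what keeps the end-to-end latency conversational.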
Accomplishments that we're proud of
We showed the model is theoretically feasible with current technology, and we mapped out a way to potentially replace scarce human interpreters with software. We also designed a system that fundamentally respects user privacy. We produced a feasible blueprint for a Deaf person to open a bank account or attend a lecture without relying on a third party.
What we learned
We learned that sign language isn't just hands; the face provides 50% of the context. If the avatar doesn't smile or frown, the message is lost. That insight completely changed how we designed our proposed tech stack.
What's next for Signapse
Now we move from planning to building. We are starting with Signapse Classroom, a simple tablet application designed for students. We plan to test it in a pilot program to see if it improves engagement and information retention. Once we validate that the model works effectively in a real classroom setting, then we can expand into the community.