When we first started this project, we were thinking about what makes communication feel real. It’s not just the words we say—it’s the way we say them. A pause. A laugh. A shift in tone. A raised brow. These small cues carry big emotional weight.
But what happens when those cues are stripped away?
That’s what we realized when we spoke to members of the deaf and hard-of-hearing community. One person told us how exhausting it was to read captioned phone calls and still feel like they were missing the point. “I can see what they’re saying,” they said, “but I can’t feel how they’re saying it.” That really stuck with us.
Alongside captioning, products that serve this community seem to default to video. But what if video doesn't work? What if the caller doesn't have access to video or captioning accommodations? Then access to health services, government aid, and education is reduced.
So we created Nuance—a calling app that brings emotion back into captioned conversations. When someone speaks, Nuance doesn’t just transcribe their words—it shows how they feel. A digital avatar on screen mirrors their tone in real time, giving users visual cues that bridge the emotional gap.
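To make the idea concrete, here is a rough sketch of how a detected emotion might drive the avatar's expression in real time. The emotion labels, expression weights, and confidence blending below are simplified illustrations, not our production logic:

```python
# Illustrative mapping from a detected emotion label to avatar expression
# weights. The labels and weight names here are assumptions for the sketch.
EXPRESSIONS = {
    # emotion -> (brow_raise, mouth_smile, eye_openness)
    "joy":     (0.3, 1.0, 0.8),
    "sadness": (-0.4, -0.6, 0.5),
    "hope":    (0.5, 0.4, 1.0),
    "neutral": (0.0, 0.0, 0.7),
}

def avatar_weights(emotion: str, confidence: float) -> dict:
    """Blend the target expression toward neutral by model confidence,
    so an uncertain prediction produces a subtler facial cue."""
    target = EXPRESSIONS.get(emotion, EXPRESSIONS["neutral"])
    neutral = EXPRESSIONS["neutral"]
    blended = [n + confidence * (t - n) for t, n in zip(target, neutral)]
    return dict(zip(("brow_raise", "mouth_smile", "eye_openness"), blended))

# Example: avatar_weights("joy", 0.9) yields a mostly-smiling expression.
```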
We used physical and digital notebooks to come up with a handful of rough ideas individually. After collecting more than 30 ideas in total, we came together to bridge them and form new ones. Research with Perplexity helped us identify gaps in current technology and discover an often-overlooked user group: people who are hard of hearing. In Figma, we built wireframes, user flows, and interface prototypes. We built the avatar characters ourselves in Blender, modeling, rigging, texturing, and animating them from scratch. Each facial feature, color, and movement was carefully crafted to visually represent nuances like joy, sadness, and hope. Toward the very end, we pieced the animations together in After Effects to work around Figma's limitations with video and GIFs.
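For anyone curious what the Blender side of that looks like, here is the kind of shape-key keyframing involved in animating an expression. The object and shape-key names ("Avatar", "Smile", "BrowRaise") are placeholders, not our actual asset names; this runs inside Blender's Python console:

```python
import bpy

avatar = bpy.data.objects["Avatar"]        # the rigged avatar mesh (placeholder name)
keys = avatar.data.shape_keys.key_blocks   # sculpted facial shape keys

def keyframe_expression(name: str, value: float, frame: int) -> None:
    """Set a shape key's influence and record it on the timeline."""
    block = keys[name]
    block.value = value
    block.keyframe_insert(data_path="value", frame=frame)

# Ease from neutral into a "joy" expression over one second at 24 fps.
keyframe_expression("Smile", 0.0, frame=1)
keyframe_expression("Smile", 1.0, frame=24)
keyframe_expression("BrowRaise", 0.3, frame=24)
```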
The big initial challenge we had to tackle was how to "design for the future" when many accessibility issues still persist in the present. What would people already have on hand? Who would have access to this technology? We challenged ourselves to think not only about intuitive design but about fitting the product into users' daily routines and struggles so that integration would be as invisible as possible; we imagined AI as an invisible helping hand. On the technical side, a notable challenge was animating and rendering expressive, responsive reactions in 3D and then incorporating them into a 2D prototype, which pushed us toward cross-platform techniques and workarounds.
We integrated real-time ASL support into digital communication. Using the device’s video camera, Nuance can recognize American Sign Language gestures performed by the user and translate them into captions on the other caller’s screen. This feature ensures two-way accessibility—making it easier for ASL users to express themselves naturally during a call without needing to switch to typing.
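As a simplified sketch of the recognition loop this feature implies, the snippet below assumes MediaPipe Hands for landmark extraction; classify_gesture is a hypothetical stub standing in for a trained ASL model, not our actual engine:

```python
import cv2
import mediapipe as mp

mp_hands = mp.solutions.hands

def classify_gesture(landmarks) -> str:
    """Hypothetical classifier: map 21 (x, y, z) hand landmarks to an ASL
    gloss. A real version would be a trained model; this stub only shows
    the interface."""
    return "<sign>"

def caption_stream():
    cap = cv2.VideoCapture(0)  # the caller's video camera
    with mp_hands.Hands(max_num_hands=2) as hands:
        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            # MediaPipe expects RGB; OpenCV captures BGR.
            results = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
            if results.multi_hand_landmarks:
                for hand in results.multi_hand_landmarks:
                    # Each recognized gloss becomes a caption on the
                    # other caller's screen.
                    yield classify_gesture(hand.landmark)
    cap.release()
```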
When we saw it all come together, it felt like we were finally restoring something that had long been missing from captioned communication: emotional presence.
We’re proud of what we’ve built, not just on the technical level, but in how it reimagines what inclusive, human-centered design can be. We’ve learned that accessibility isn’t just about making things functional—it’s about making people feel seen, heard, and understood.
What’s next for Nuance? We would diversify the avatar options, refine our emotion-detection algorithms with machine learning, and expand our ASL translation engine's gesture vocabulary. We would also explore partnerships with hard-of-hearing creators and researchers to ensure we continue designing with, not just for, the community.
Because at the end of the day, Nuance isn’t just about talking—it’s about truly understanding each other.
Note: some audio from the video demo did not come through! It was meant to be a demo call.
Built With
- aftereffects
- blender
- figma
