Inspiration

Music is one of the richest forms of human expression, but for deaf and hard-of-hearing individuals, it is often reduced to fragmented captions or text that fails to capture rhythm, expression, and flow.

We were inspired by a gap that goes beyond accessibility: no existing system helps users understand music as a structure, let alone recreate it meaningfully within a community.

Signify was created from a simple starting point: music shouldn't just be translated; it should be made visually expressive and emotionally understandable.

What it does

Signify is a music accessibility and expression platform that lets users select a song and immediately view it through a visual interpretation layer. It translates songs into three synchronized components:

  1. A sign language avatar that performs the lyrics in real time
  2. An emotion layer that visualizes the mood of the song through changing color
  3. A beat visualization that represents pacing and structure
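Since all three layers must stay aligned to the same playback clock, a shared timeline of timestamped cues is one natural way to model them. The following is only an illustrative sketch; every type and name here is hypothetical, not from Signify's actual implementation:

```typescript
// One shared timeline drives all three layers, so the avatar,
// emotion color, and beat bars stay in sync during playback.
// All names are illustrative, not from Signify's codebase.

type SignCue = { timeMs: number; gloss: string };     // sign for a lyric word
type EmotionCue = { timeMs: number; color: string };  // mood color at this point
type BeatCue = { timeMs: number; intensity: number }; // 0..1 beat strength

interface VisualTrack {
  signCues: SignCue[];
  emotionCues: EmotionCue[];
  beatCues: BeatCue[];
}

// Return the latest cue at or before `timeMs` (cues assumed sorted),
// so each layer can render its current state as playback advances.
function cueAt<T extends { timeMs: number }>(cues: T[], timeMs: number): T | undefined {
  let current: T | undefined;
  for (const cue of cues) {
    if (cue.timeMs <= timeMs) current = cue;
    else break;
  }
  return current;
}

const track: VisualTrack = {
  signCues: [{ timeMs: 0, gloss: "HELLO" }, { timeMs: 1200, gloss: "WORLD" }],
  emotionCues: [{ timeMs: 0, color: "#4a90d9" }, { timeMs: 1000, color: "#d94a6a" }],
  beatCues: [{ timeMs: 0, intensity: 0.3 }, { timeMs: 500, intensity: 0.9 }],
};

console.log(cueAt(track.signCues, 1300)?.gloss);   // "WORLD"
console.log(cueAt(track.emotionCues, 500)?.color); // "#4a90d9"
```

Keeping a single clock and per-layer cue lists means adding a fourth layer later (e.g. haptics) would not require changing the synchronization logic.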

Users can then choose to record their own visual interpretation of the song using these components, turning the experience into a personal performance. These recordings can be shared to the community feed, allowing users to express, reinterpret, and exchange music visually.

How we built it

Signify is designed to help deaf and hard-of-hearing users experience music more completely and expressively.

UX Research Process

Online Community Research: To understand real user needs, we analyzed discussions from deaf communities on platforms like Reddit and comments on YouTube videos about deaf music experiences. This helped us identify recurring pain points, such as difficulty understanding lyrics, the absence of emotional tone, and the lack of tools that combine rhythm and meaning.

Problem Validation: From the research, we confirmed that existing solutions are fragmented. Captioning tools focus only on lyrics translation, while other approaches, such as vibration-based systems, fail to capture emotional depth or provide a complete experience.

Competitive Analysis: We evaluated current tools such as live captioning apps, lyric-translation platforms, and sign language interpreters. We found that none of them integrates sign language, emotion, and rhythm into a single comprehensive system.

Sketching and Feature Direction

Based on our research, we explored two main directions: a tool for understanding music and a platform for sharing music experiences. We used Gemini to refine our ideas, exploring approaches for structuring music into visual components and iterating more efficiently on how meaning and feeling could be represented. We started with low-fidelity sketches in Notability to brainstorm and visualize our app without overwhelming users. After several iterations, we moved to mid-fidelity designs in Figma and refined the layout, experimenting with different ways of organizing the interface so users could easily follow the song and engage with the visual components.

UI elements

We focused on designing a UI that is approachable and meaningful for deaf and hard-of-hearing people.

We placed the sign language avatar at the center of the interface, where it translates lyrics in real time so users can follow the core meaning of the song.

To support different preferences, we included customization settings that allow users to toggle elements, such as color, lyrics, and avatar visibility.

To represent emotion, we used changes in color intensity on the edges, allowing users to quickly sense shifts in mood without needing to read labels.
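One way to implement that effect is to blend the edge color's opacity with an emotion-intensity value. This is only a sketch of the idea; the function name and hex handling are our own illustration, not Signify's code:

```typescript
// Illustrative only: scale an edge color's opacity by the song's
// emotional intensity, so mood shifts read at a glance.
function edgeColor(hex: string, intensity: number): string {
  // Clamp intensity to the valid 0..1 range.
  const alpha = Math.min(1, Math.max(0, intensity));
  // Append an alpha channel to a 6-digit hex color,
  // e.g. "#d94a6a" at 0.8 intensity becomes "#d94a6acc".
  const alphaHex = Math.round(alpha * 255).toString(16).padStart(2, "0");
  return hex + alphaHex;
}

console.log(edgeColor("#d94a6a", 0.8)); // "#d94a6acc"
```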

For rhythm and beat, we introduced volume bars that change in motion and intensity, along with a haptics extension, to help users understand pacing through visual and sensory cues.
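The same normalized beat amplitude could drive both cues, for example mapping it to a bar height on screen and a vibration duration for the haptic pulse. A minimal sketch, assuming amplitudes normalized to 0..1 (all names are hypothetical):

```typescript
// Hypothetical mapping from beat amplitude to the two sensory cues;
// none of these names come from Signify's implementation.

// Map a normalized amplitude (0..1) to a volume-bar height in pixels.
function barHeight(amplitude: number, maxPx = 120): number {
  const clamped = Math.min(1, Math.max(0, amplitude));
  return Math.round(clamped * maxPx);
}

// Map the same amplitude to a vibration duration in milliseconds,
// so stronger beats produce longer haptic pulses
// (e.g. usable with the Web Vibration API's navigator.vibrate).
function hapticPulseMs(amplitude: number, maxMs = 80): number {
  const clamped = Math.min(1, Math.max(0, amplitude));
  return Math.round(clamped * maxMs);
}

console.log(barHeight(0.5));     // 60
console.log(hapticPulseMs(1.2)); // 80 (clamped)
```

Deriving both cues from one amplitude value keeps the visual and tactile channels in agreement, which matters when users rely on them together to read pacing.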

Challenges we ran into

One of our biggest challenges was defining the core purpose of the product: whether Signify should focus on music consumption (helping users understand songs) or music expression (helping users create and share interpretations). Our team explored both directions, which led to multiple iterations of the core feature set, layout, and user flow.

At the same time, many of our teammates had little or no design experience, which made it challenging to translate complex ideas into a working prototype, given our unfamiliarity with Figma.

Accomplishments that we're proud of

We are proud that Signify transforms music into something users can see, understand, and share. We are especially proud of how closely the animations match our original vision, achieved through extensive troubleshooting and innovative thinking to push beyond the limitations of Figma.

We truly believe that Signify offers the deaf community an opportunity to experience and interact with music in entirely new ways, and points toward a future where music is fully accessible and shared by everyone.

What we learned

Emotions connect people across experiences, and music is one of the strongest ways to express them, yet its accessibility is often taken for granted by hearing individuals. Building Signify challenged us to rethink music not just as sound, but as a collection of meaning, emotion, and structure that should be approachable to everyone. Through our research and design process, we developed a deeper understanding of the deaf community and their experiences. More importantly, we realized how much opportunity there is to rethink "standard" ways of experiencing media and how accessibility opens up more forms of interaction.

What's next for Signify

Signify reconstructs meaning and expression in music, providing a way for deaf users to connect through a shared community.

Next, we want to evolve Signify into a more complete creative platform by improving the expressiveness of our sign language avatar to better capture the nuance of human emotion. We would also love to deepen our use of Gemini by analyzing lyrics and past performance data to better map emotional shifts and rhythm changes across a wider range of songs. We also aim to introduce remix and team-matching functions where users can build on and reinterpret each other's visual performances, turning music into a collaborative experience.

Ultimately, we aim to strengthen the community features to support sharing and interaction within the deaf community, growing Signify into a space for visual music storytelling and collaboration.
