Inspiration

The inspiration behind Percevia began with something deeply personal.

I have misophonia, a condition where certain sounds trigger intense emotional and physiological responses. In school, this became something I had to quietly manage every single day. The sound of someone chewing, clicking a pen, or tapping on a desk could send me into a state of rage or anxiety so consuming that I could no longer focus on the lecture in front of me. I was fighting my own nervous system while everyone else around me was just taking notes.

Rather than asking classmates to stop what they were doing, I started asking a different question: "Do you know what misophonia is?" Almost every time, the answer was no. So I would explain it: that certain sounds produce a neurological reaction in some people, that it is not a choice or an overreaction, and that I personally experience it. That small act of education opened a conversation instead of creating a conflict. But it also made me realize: if the people closest to me had never heard of this condition, how many others were suffering silently without even a word to describe what they were going through?

During the design process I wanted to expand beyond just my own discomfort. I asked friends and family members about their own sensory experiences. A family member brought up veterans with PTSD, where sound becomes a portal back to trauma. A friend brought up autistic individuals, some of whom experience genuine physical pain from certain sounds and volume levels. Three very different communities, all navigating the same invisible problem: a nervous system hijacked by sound.

What it does

Percevia is a bone-conduction AI system designed to help people with auditory sensitivities regulate their emotional and physiological responses to trigger sounds — in real time, without disrupting their environment.

The system has three parts:

The bone-conduction device is worn at the temple or behind the ear. It detects the vibration signatures of trigger sounds in the user's environment and communicates that data to the AI instantly. Unlike noise-canceling headphones, it does not block the world out: the user remains fully present while the device works quietly alongside their nervous system.

The AI is trained on the user's personal trigger profile. When it detects a known trigger, it sends a personalized counter-frequency back through the device, below the level of conscious hearing, before the nervous system has time to fully escalate. The regulation is subtle and continuous, like a steadying hand before the panic sets in.

The companion app gives users an emotional dashboard to track their patterns over time: which sounds triggered responses, how intense each reaction was, how quickly the system regulated it, and what conditions made certain days harder than others. Over time the AI refines itself to the individual, becoming more precise the longer it is used.
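Percevia is a speculative design, so no detection code exists yet. As a rough illustration only, the detect-and-respond loop described above could be sketched as follows; every label, threshold, and frequency value here is a hypothetical placeholder, not part of the actual design:

```python
from dataclasses import dataclass, field


@dataclass
class TriggerProfile:
    """Hypothetical per-user profile: trigger label -> detection threshold (0-1).

    A lower threshold means the system intervenes on weaker matches.
    """
    thresholds: dict = field(default_factory=dict)


def is_trigger(profile: TriggerProfile, label: str, confidence: float) -> bool:
    """True when a classified sound matches a known trigger strongly enough."""
    threshold = profile.thresholds.get(label)
    return threshold is not None and confidence >= threshold


def counter_frequency_hz(label: str) -> float:
    """Placeholder lookup of a sub-audible regulation frequency per trigger."""
    table = {"chewing": 40.0, "pen_clicking": 55.0, "desk_tapping": 48.0}
    return table.get(label, 45.0)  # fall back to a neutral default


# Example: the (hypothetical) classifier reports "chewing" at 90% confidence.
profile = TriggerProfile(thresholds={"chewing": 0.6, "pen_clicking": 0.8})
signal = None
if is_trigger(profile, "chewing", 0.9):
    signal = counter_frequency_hz("chewing")  # frequency sent to the device
```

The point of the sketch is the shape of the loop, not the numbers: classify, compare against a personal profile, respond below conscious hearing.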

How we built it

Percevia was built as a speculative design project rooted in real human need. The process began with personal experience and community research — listening to people whose sensory lives are rarely centered in product design.

The concept was developed through:

  • User research: conversations with people who have misophonia, family members of veterans with PTSD, and individuals in neurodiverse communities to understand the range of lived experiences the tool needed to serve.
  • Sensory science: exploration of how bone conduction technology works, how the nervous system processes auditory input, and how frequency and vibration interact with emotional regulation.
  • Design iteration: developing the visual identity, interface design, and interaction model for both the device and the companion app, grounded in a warm, calming aesthetic that reflects the emotional context of the user.
  • Systems thinking: mapping the full experience from onboarding to daily use to long-term pattern recognition, with safeguards and fail-states built in at every stage.

Challenges we ran into

The most significant challenge was scope. Misophonia, PTSD, and autism each represent deeply complex, distinct experiences. Designing a single tool that could meaningfully serve all three without flattening the differences between them required constant recalibration. The risk of oversimplifying was real, and staying honest about what the tool can and cannot do was an ongoing tension throughout the process.

A second challenge was designing around invisibility. These conditions are largely invisible to others, which means the tool itself needed to be invisible too: working in the background without drawing attention, without requiring the user to explain themselves, and without creating new social friction in the very situations it was designed to help with.

Accomplishments that we're proud of

The accomplishment we are most proud of is simply naming the problem clearly. Misophonia affects a significant portion of the population, yet most people have never heard the word. Designing Percevia required building a shared language around experiences that are often dismissed, minimized, or misunderstood, and that work felt meaningful independent of the tool itself.

We are also proud of the design cohesion of the final product. The visual identity, the interaction model, the tone of the interface, and the logic of the AI system all feel like they belong to the same world, one that takes the emotional experience of the user seriously rather than treating it as a technical problem to be solved.

What we learned

This project taught us that the most underserved users are often the ones whose needs are hardest to see. Misophonia, PTSD triggers, and sensory sensitivities share the quality of being invisible to everyone except the person experiencing them. Products rarely get built for invisible problems.

We learned that personal experience is a valid starting point for design: not a bias to be corrected, but a source of genuine insight that external research alone cannot replicate. Knowing what it feels like to leave a classroom because someone is chewing made every design decision sharper and more grounded.

We also learned how much language matters. Many people suffering from these conditions don't have words for what they experience. Part of what Percevia offers is not just a tool but a framework, a way of understanding and communicating something that previously had no clear name.

What's next for Percevia

The immediate next step is clinical consultation. Bringing audiologists, neurologists, and mental health professionals into the design process would ground the speculative elements of the system in medical reality and help define what responsible claims the product can and cannot make. From there, the focus would shift to hardware prototyping, working with bone conduction technology manufacturers to develop a device form factor that is discreet, comfortable for extended wear, and sensitive enough to detect the specific vibration signatures the AI needs to do its work.

On the software side, the priority would be building out the personalization engine, the mechanism by which the AI learns an individual user's trigger profile over time and becomes more responsive and precise with each interaction.
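Since the personalization engine is still conceptual, here is one minimal sketch of how per-trigger sensitivity might adapt over time: after each logged episode, the detection threshold drifts toward intervening earlier when the reaction was intense and later when it was mild. The update rule, the `rate` parameter, and the 0-to-1 intensity scale are all assumptions made for illustration:

```python
def update_threshold(current: float, reaction_intensity: float,
                     rate: float = 0.2) -> float:
    """Nudge one trigger's detection threshold after a logged episode.

    reaction_intensity is assumed to run from 0.0 (no distress) to 1.0
    (severe). An intense reaction lowers the threshold so the system
    intervenes sooner next time; a mild one raises it to avoid
    over-triggering. The rule is a simple exponential moving average
    toward a target derived from the reaction.
    """
    target = 1.0 - reaction_intensity
    return (1.0 - rate) * current + rate * target


# A severe reaction pulls the threshold down; a calm episode relaxes it.
after_severe = update_threshold(0.6, reaction_intensity=1.0)  # ~0.48
after_calm = update_threshold(0.6, reaction_intensity=0.0)    # ~0.68
```

A small learning rate keeps single bad days from whipsawing the profile, which matches the goal of regulation that is "subtle and continuous" rather than reactive.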

Percevia: Hear more. Feel less. Live freely.

Built With

  • figma
  • figmamake
  • supabase