Inspiration

The idea started with a question I couldn't answer: Why do I sometimes walk into a room feeling fine and leave feeling anxious?

Most wellness tools measure how you feel. Heart rate, HRV, stress score. None of them ask why. And the "why" turns out to be the most useful part.

There's a real physiological phenomenon behind this – your nervous system doesn't just generate emotional states; it absorbs them from the people and environment around you. Emotional contagion is documented. Barometric pressure affects mood. Sleep debt distorts perception. But no tool attributes a feeling to its actual source in real time.

That gap is what STRATA is built to close.

What it does

STRATA tracks your body's sensations in real time and tells you exactly where each feeling is coming from.

Most of what you feel has a source – a stressed colleague nearby, a drop in air pressure, low blood sugar, an old memory surfacing before a meeting. STRATA separates these simultaneous signals, attributes each one, and gives you a targeted way to release what isn't yours.

You wear a biosensing undershirt and AR glasses. The app shows a live body map – chest, shoulders, gut, throat, with each sensation broken down across four sources: Social, Environmental, Physiological, and Psychological. When something is elevated, STRATA tells you why, and offers a 30-second intervention matched to the actual cause.

The result: you stop carrying other people's stress as if it were your own.

How we built it

STRATA is a three-part ecosystem:

  • Strata Undershirt – passive biosensing layer (HRV, skin conductance, muscle tension, breathing, temperature)
  • Strata Glasses – outward-facing AR sensors + a 3-tier intervention system that is invisible to everyone around you
  • Mobile + Web App – real-time source attribution, body map, relational constellation, and guided intervention flows

The core engine is interoceptive source attribution – mapping every sensation to one of four simultaneous sources: Social, Environmental, Physiological, and Psychological.
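As a hedged sketch of that engine, the four-way split could be modeled as normalized per-source evidence scores plus an honesty-preserving confidence measure. Everything here – the names, the margin-based confidence heuristic, the shape of the inputs – is an assumption for illustration, not STRATA's actual model:

```python
from dataclasses import dataclass

# The four STRATA source channels named in the body map.
SOURCES = ("social", "environmental", "physiological", "psychological")

@dataclass
class Attribution:
    """One sensation split across the four source channels."""
    shares: dict       # source name -> fraction of the sensation (sums to 1.0)
    confidence: float  # 0..1, how sure the model is about the split

def attribute(raw_scores: dict) -> Attribution:
    """Normalize non-negative evidence scores into fractional shares.

    `raw_scores` stands in for hypothetical feature outputs (HRV, skin
    conductance, proximity, etc.). Confidence is taken as the margin
    between the top two shares, so an ambiguous split honestly reports
    low confidence instead of faking certainty.
    """
    total = sum(raw_scores.get(s, 0.0) for s in SOURCES)
    if total == 0:
        # No signal at all: uniform split, zero confidence.
        shares = {s: 1.0 / len(SOURCES) for s in SOURCES}
        return Attribution(shares, confidence=0.0)
    shares = {s: raw_scores.get(s, 0.0) / total for s in SOURCES}
    top_two = sorted(shares.values(), reverse=True)[:2]
    return Attribution(shares, confidence=top_two[0] - top_two[1])
```

With evidence of `{"social": 5.8, "environmental": 1.2, "physiological": 2.0, "psychological": 1.0}`, the social share comes out at 0.58 – the "58% absorbed from someone nearby" reading described above.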

Challenges we ran into

The hardest part wasn't building the sensors. It was building trust.

  • Designing calm into every word. Early versions used clinical language – "Chest Impact · High" – that triggered anxiety instead of clarity. Every notification, every label, every CTA had to be rewritten until it sounded like a trusted friend, not a medical alert.
  • Numbers without context cause panic. "61% · High" means nothing and frightens everyone. The entire data display system had to be rebuilt around meaning first, metric second – "58% absorbed from someone nearby" instead of a raw score.
  • The misattribution trap. A tool that attributes feelings to other people could easily become a blame weapon. We had to build the safeguard into the design itself – aggregate-only social data, confidence percentages always visible, and language that says "likely contributing," never "causing."
  • Designing for invisibility. Every AR intervention had to work mid-meeting, mid-conversation, without anyone nearby noticing. That constraint touched every animation duration, every visual placement, every piece of copy.
  • Knowing when the model is wrong. The attribution engine doesn't always have enough signal. Showing honest uncertainty, rather than hiding it behind false confidence, turned out to make the product more trustworthy, not less.
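The language rules above – meaning first, "likely contributing" never "causing," honest uncertainty when the signal is weak – can be sketched as a small copy-generation function. This is a hypothetical illustration; the thresholds and exact wording are assumptions, not the product's real copy:

```python
def sensation_copy(source: str, share: float, confidence: float) -> str:
    """Turn one attribution into calm, hedged UI copy.

    Rules encoded here (all illustrative):
      - below a confidence threshold, admit uncertainty outright
      - lead with meaning, put the metric second
      - hedge with "likely"; never use causal language
    """
    pct = round(share * 100)
    if confidence < 0.3:
        # Low signal: say so rather than fake certainty.
        return f"We're not sure yet; {source} factors may be involved."
    if source == "social":
        return f"About {pct}% of this is likely absorbed from someone nearby."
    return f"{source.capitalize()} factors are likely contributing (~{pct}%)."
```

The point of the sketch is that the trust rules live in code paths, not in a style guide: there is simply no branch that can emit an alarming or causal sentence.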

Accomplishments that we're proud of

  • "58% of what you're feeling isn't yours." Getting to a place where a wellness tool could say something that specific, that honest, and that immediately useful – that's what we're most proud of. Not the sensors, not the visualisation. The sentence.
  • The intervention works as a design object. The 6-screen Quick Intervention flow – from alert to source attribution to guided breath to resolution – feels like one seamless, calm experience. Nothing alarm-like. Nothing clinical. A 30-second journey that respects the user's context and gives them something measurable at the end.
  • The Relational Constellation. Mapping social proximity as orbital distance – not a list, not a bar chart, but a living spatial map of who affects your nervous system and how much – is a genuinely new interface paradigm for emotional data.
  • Safeguards as design decisions. Every ethical risk we identified became a visible design choice – the confidence percentage, the aggregate-only social data, the avoidance-pattern detector, the permanent employer access block. Ethics built into the UI rather than buried in a privacy policy.
  • It holds together as a world. From the notification to the AR glasses to the body map to the resolution screen, STRATA has a design language, a voice, and a philosophy that stay consistent all the way through.

What we learned

The most surprising insight: the body map is not the product. The reframe is the product.

  • The sentence does the work. When someone sees "58% of what you're feeling isn't yours," that sentence alone changes how they relate to their own nervous system. Everything else – the visualisation, the intervention, the history – serves that one moment of recognition.
  • Language is a design material. The difference between "Chest Impact · High" and "That tightness isn't yours" is not just tone; it's the entire product philosophy made visible. Every word either builds trust or destroys it. We rewrote every label, every notification, every CTA until it passed one test: would a calm, trusted friend say it this way?
  • Uncertainty is a feature, not a bug. Showing confidence percentages honestly – "likely contributing" rather than "causing" – made the product more trustworthy, not less. Users don't need certainty. They need honesty.
  • Ethics have to live in the UI. Safeguards buried in a privacy policy don't build trust; safeguards visible in the interface do. The permanent employer access block, the aggregate-only social data, the misattribution warnings – these aren't compliance features. They're design decisions.
  • Calm is hard to design. It's much easier to make something urgent than something reassuring. Making STRATA feel like a trusted presence – never alarming, always clear, always on your side – was the hardest design problem we faced.

What's next for STRATA, "Know where your feelings come from."

STRATA is a speculative product today. But none of the underlying science is speculative. Emotional contagion is documented. Barometric pressure affects mood. HRV measures nervous system state. Biosensing fabrics exist. AR glasses exist. The gap between what STRATA imagines and what is technically possible is smaller than it looks.

  • Calibration research is the immediate next step. The attribution model needs real training data – controlled studies mapping biosensor inputs against known emotional sources. That's the hard scientific work that turns the concept into a product.
  • Clinical collaboration is the second horizon. STRATA was designed from the start to work alongside therapists, not replace them. The next version deepens that – weekly summaries, session preparation, and pattern reports that give clinicians something they've never had before: objective interoceptive data between appointments.
  • HSP and high-absorber communities come first as a market. The people who need this most already know they need something like it: highly sensitive people, therapists, caregivers, teachers – anyone who spends their day in close emotional proximity to others. That's the beachhead.
  • The long arc is a new literacy. STRATA isn't just a wellness tool. It's an attempt to give people a working model of their own nervous system – one that includes other people, the environment, and time. If it works, users don't just feel better. They understand themselves differently. That's worth building.

Built With

  • figma
  • make
+ 138 more