Inspiration

We've all been in a room where something felt off before anyone said a word. A meeting that started tense and never recovered. A conversation where the silence said more than the words. We can track the weather, our steps, our sleep, but the emotional atmosphere of a room? Invisible. Atmos was built to change that. If people had a real-time read of the emotional temperature around them, they could intervene before tension becomes conflict and show up more fully for the people they're with.

What it does

Atmos is a real-time emotional atmosphere awareness app. It uses biometric data from wearables (heart rate variability, skin conductance) alongside acoustic signals from your phone to generate a live tension score for yourself and the space around you.
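As a rough illustration of how such a score could be fused, here is a minimal TypeScript sketch; the feature names and weights are hypothetical, not the actual Atmos model, and it assumes inputs are already normalised to 0..1.

```typescript
// Minimal sketch of a tension score. Feature names and weights are
// illustrative only, not the production Atmos model.
interface BiometricSample {
  hrv: number;             // heart rate variability: 0 = very low (stressed), 1 = high (calm)
  skinConductance: number; // electrodermal activity: 0 = calm, 1 = aroused
}

interface AcousticSample {
  loudness: number;      // 0..1
  speechOverlap: number; // fraction of time two or more people speak at once
}

function tensionScore(bio: BiometricSample, audio: AcousticSample): number {
  // Low HRV and high skin conductance both push the score up.
  const physiological = 0.6 * (1 - bio.hrv) + 0.4 * bio.skinConductance;
  // Raised voices and frequent interruptions push it up further.
  const acoustic = 0.5 * audio.loudness + 0.5 * audio.speechOverlap;
  // Weighted blend, clamped to 0..1.
  return Math.min(1, Math.max(0, 0.7 * physiological + 0.3 * acoustic));
}
```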

The Emotional Compass is the core screen: a pulsing organic radar that shifts colour as tension rises, from calm blue through tense orange to stressed red. The Atmosphere feature uses AR overlays to show individual tension signatures above each person in the room. The Insights tab gives a post-session debrief covering your stress arc, emotional moments, and body correlations. The What If module surfaces early warnings, typically 60 to 90 seconds before an emotional shift becomes a conflict.
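The calm-blue-through-orange-to-red ramp can be expressed as a simple hue interpolation over the tension score; the HSL breakpoints below are illustrative rather than the exact palette from our design system.

```typescript
// Maps a 0..1 tension score onto the Compass colour ramp
// (calm blue -> tense orange -> stressed red).
// Hue values are illustrative; the real palette lives in the Figma design system.
function tensionToColour(score: number): string {
  const t = Math.min(1, Math.max(0, score));
  // Hue: 210 (blue) falling through ~30 (orange) to 0 (red).
  const hue = 210 * (1 - t);
  return `hsl(${hue}, 80%, 55%)`;
}
```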

Everything is opt-in and interpretive, never diagnostic.

How we built it

We started in Figma, building a full design system with dual dark/light themes, an emotion-mapped colour palette, and a consistent visual language across eight screens. The frontend is React with TypeScript, animated with Motion/React, and powered by a custom SVG engine for the components that needed more than any charting library could offer. The Emotional Radar, a 12-point animated blob with a rotating sweep line and live colour mapping, was built entirely from scratch using requestAnimationFrame.
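For a sense of how the radar works, here is a stripped-down sketch of a 12-point blob driven by requestAnimationFrame; the component name, constants, and straight-line path segments are simplifications, not the production component.

```tsx
// Minimal 12-point pulsing blob driven by requestAnimationFrame.
// Names and constants are illustrative, not the actual Atmos source.
import { useEffect, useRef } from "react";

const POINTS = 12;

export function RadarBlob({ tension }: { tension: number }) {
  const pathRef = useRef<SVGPathElement>(null);

  useEffect(() => {
    let frame: number;
    const draw = (time: number) => {
      const t = time / 1000;
      const pts: [number, number][] = [];
      for (let i = 0; i < POINTS; i++) {
        const angle = (i / POINTS) * Math.PI * 2;
        // Base radius plus a per-point wobble; higher tension = jitterier blob.
        const wobble = Math.sin(t * 2 + i * 1.7) * (4 + 10 * tension);
        const r = 80 + wobble;
        pts.push([100 + r * Math.cos(angle), 100 + r * Math.sin(angle)]);
      }
      // Close the polygon (straight segments here; the real radar uses curves).
      const d =
        pts
          .map(([x, y], i) => `${i === 0 ? "M" : "L"}${x.toFixed(1)},${y.toFixed(1)}`)
          .join(" ") + " Z";
      pathRef.current?.setAttribute("d", d);
      frame = requestAnimationFrame(draw);
    };
    frame = requestAnimationFrame(draw);
    return () => cancelAnimationFrame(frame);
  }, [tension]);

  return (
    <svg viewBox="0 0 200 200" width={200} height={200}>
      <path ref={pathRef} fill={`hsl(${210 * (1 - tension)}, 80%, 55%)`} opacity={0.8} />
    </svg>
  );
}
```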

The AR Atmosphere layer simulates real-time tension halo compositing over tracked person figures, with live drift and per-person inspection. The What If predictions and Compass Guidance are powered by the Anthropic Claude API, constrained to emotional coaching and atmosphere analysis. We also built two standalone interactive HTML demos, one for AR Atmosphere and one for the Insights tab, runnable in any browser with no dependencies.
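The constraint on the AI lives in the system prompt, roughly along these lines; the prompt text and model name below are illustrative placeholders, not the exact production values.

```typescript
// Sketch of the constrained Claude call behind What If / Compass Guidance.
// System prompt and model name are illustrative placeholders.
import Anthropic from "@anthropic-ai/sdk";

const client = new Anthropic(); // reads ANTHROPIC_API_KEY from the environment

export async function compassGuidance(atmosphereSummary: string): Promise<string> {
  const response = await client.messages.create({
    model: "claude-sonnet-4-5",
    max_tokens: 300,
    // Hard constraint: coaching and atmosphere analysis only, never diagnosis.
    system:
      "You are an emotional-atmosphere coach. Offer interpretive, supportive guidance " +
      "about group tension and timing. Never diagnose, label, or make claims about any " +
      "individual's mental health.",
    messages: [{ role: "user", content: atmosphereSummary }],
  });

  const block = response.content[0];
  return block.type === "text" ? block.text : "";
}
```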

Challenges we ran into

The hardest problem wasn't technical; it was ethical. Visualising emotional data in a way that feels informative rather than judgemental required seven iterations of the radar and halo design before we landed on something that felt honest. We also had to design a consent architecture that was structural, not cosmetic: opt-in toggles, a three-step onboarding consent flow, and hard constraints on what the AI is allowed to conclude. The AR compositing pipeline was the sharpest technical challenge. Bridging from a high-fidelity simulation to a real device camera feed will require a full ARKit and CoreML integration, which we are actively scoping.
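In practice, a structural consent architecture comes down to gating every data stream behind an explicit flag checked at the point of use; the shape below is a simplified sketch of that idea, with illustrative field names rather than our actual consent store.

```typescript
// Simplified shape of a structural consent model: each data stream is
// off until the user explicitly turns it on during onboarding.
// Field names are illustrative, not the actual Atmos consent store.
interface ConsentState {
  biometrics: boolean; // wearable HRV / skin conductance
  acoustics: boolean;  // phone microphone features
  arOverlay: boolean;  // per-person halos in the Atmosphere view
}

const DEFAULT_CONSENT: ConsentState = {
  biometrics: false,
  acoustics: false,
  arOverlay: false,
};

// Features check consent at the point of use, so turning a toggle off
// structurally removes the data stream rather than merely hiding it in the UI.
function canComputeTension(consent: ConsentState): boolean {
  return consent.biometrics || consent.acoustics;
}
```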

Accomplishments that we're proud of

We're proud of the Emotional Radar, one component that communicates the entire product at a glance. We're proud that the consent framework is built into the product itself, not bolted on as a disclaimer. And we're proud that the core concept held across three completely different real-world scenarios (a board meeting, a college seminar, a family gathering), which tells us the underlying signal model generalises.

What's next for Atmos

We want to integrate live biometric data from Oura Ring and Apple Watch, build the on-device CoreML inference layer for real acoustic signal processing, and ship the AR feature on iOS via ARKit. From there: a professional portal for coaches and therapists, a multi-user shared atmosphere mode, and independent ethical review of the inference model. The long-term vision is Atmos as ambient infrastructure, embedded in the spaces where the most consequential human interactions happen.

Built With

Figma, React, TypeScript, Motion/React, SVG, Anthropic Claude API
