Inspiration
I've always been fascinated by the gap between how we feel and what our screens show us. Every app looks the same whether you're calm, exhausted, or wired. I wanted to build something that actually responds to you — not your preferences, but your body in real time.
What I Learned
Working with the Claude API taught me that AI can do a lot more than answer questions. When you feed it biometric signals and ask it to interpret rather than just classify, the results are surprisingly nuanced. I also learned how much data wearables already collect that nobody is using creatively.
How I Built It
MoodWave is built around a simple pipeline:
$$\text{Biometrics} \xrightarrow{\text{Claude API}} \text{Mood JSON} \xrightarrow{\text{Visual Engine}} \text{Generative Display}$$
The app reads heart rate, HRV, and sleep quality, sends them to Claude, and receives back a mood label, color palette, motion style, and tempo. That JSON drives a full-screen generative canvas that renders on any display — phone, monitor, or room projector.
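Here's a minimal sketch of the first half of that pipeline, using the Anthropic TypeScript SDK. The field names in MoodJSON mirror what's described above (mood label, palette, motion style, tempo), but the exact schema, prompt wording, and model choice are my illustrative assumptions, not the shipped code:

```typescript
// Biometrics -> Claude -> Mood JSON, sketched with the @anthropic-ai/sdk package.
import Anthropic from "@anthropic-ai/sdk";

interface Biometrics {
  heartRateBpm: number; // current heart rate from the wearable
  hrvMs: number;        // heart-rate variability in milliseconds
  sleepQuality: number; // 0-1 sleep score
}

interface MoodJSON {
  mood: string;      // e.g. "calm", "wired"
  palette: string[]; // hex colors for the visual engine
  motion: string;    // e.g. "slow drift", "jittery pulses"
  tempoBpm: number;  // animation tempo
}

const anthropic = new Anthropic(); // reads ANTHROPIC_API_KEY from the environment

async function interpretMood(b: Biometrics): Promise<MoodJSON> {
  const response = await anthropic.messages.create({
    model: "claude-3-5-sonnet-20241022",
    max_tokens: 512,
    messages: [{
      role: "user",
      content:
        `Interpret these biometrics as an emotional state. ` +
        `Heart rate: ${b.heartRateBpm} bpm, HRV: ${b.hrvMs} ms, ` +
        `sleep quality: ${b.sleepQuality}. ` +
        `Reply with only JSON: {"mood": string, "palette": string[], ` +
        `"motion": string, "tempoBpm": number}.`,
    }],
  });

  // The Messages API returns a list of content blocks; we expect one text block.
  const block = response.content[0];
  if (block.type !== "text") throw new Error("expected a text response");
  return JSON.parse(block.text) as MoodJSON;
}
```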
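And for the second half, a stand-in for the visual engine: the real renderer isn't shown in this writeup, so this loop is only a guess at the idea, mapping tempo to animation speed and the palette to drifting shapes on a full-screen canvas:

```typescript
// Mood JSON -> generative canvas: a hypothetical rendering loop, not MoodWave's engine.
function startDisplay(mood: MoodJSON, canvas: HTMLCanvasElement) {
  const ctx = canvas.getContext("2d")!;
  const speed = mood.tempoBpm / 60; // cycles per second

  function frame(tMs: number) {
    const t = (tMs / 1000) * speed;
    ctx.fillStyle = "black";
    ctx.fillRect(0, 0, canvas.width, canvas.height);
    // One drifting circle per palette color, phase-shifted by its index.
    mood.palette.forEach((color, i) => {
      const phase = t + (i * Math.PI * 2) / mood.palette.length;
      const x = canvas.width * (0.5 + 0.35 * Math.cos(phase));
      const y = canvas.height * (0.5 + 0.35 * Math.sin(phase * 0.7));
      ctx.fillStyle = color;
      ctx.beginPath();
      ctx.arc(x, y, 40, 0, Math.PI * 2);
      ctx.fill();
    });
    requestAnimationFrame(frame);
  }
  requestAnimationFrame(frame);
}
```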
Challenges
The hardest part was translating raw biometric numbers into something emotionally meaningful. Heart rate alone tells you very little — but combined with HRV and sleep quality, patterns emerge. Designing a Claude prompt that reliably interprets that combination was the biggest technical challenge of the project; a sketch of the idea follows.
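This illustrative system prompt is not the project's actual prompt, but it shows the principle described above: push Claude to reason over the combination of signals rather than classify any single number, and pin the output schema so the visual engine can parse replies deterministically:

```typescript
// Hypothetical system prompt demonstrating combined-signal interpretation.
const SYSTEM_PROMPT = `
You interpret wearable biometrics as emotional states.
Rules:
- Never judge a single number in isolation; reason about how heart rate,
  HRV, and sleep quality interact (e.g. high heart rate with high HRV and
  good sleep often means excitement, not stress).
- Treat HRV as the primary stress signal and sleep quality as context.
- Always answer with the same JSON schema and no surrounding prose, so
  the reply can be parsed deterministically.
`;
```

Constraining the output format this way is a common tactic for making LLM responses machine-parseable, which matters here because a malformed reply would break the rendering pipeline downstream.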