Inspiration

Modern life is overwhelming—constant notifications, fast-paced content, and cognitive overload. While AI accelerates everything, we were inspired by the opposite: slowing down. With wearable devices becoming mainstream, we saw an opportunity to turn biometric data into something deeply human—real-time emotional regulation through music. Instead of reacting to stress, what if your environment could proactively calm you?

What it does

CalmBeat is a bio-responsive music system that dynamically adjusts audio based on your physiological state. It:

- Reads real-time biometric data (heart rate, HRV, stress levels)
- Detects your emotional state using AI models
- Generates or mixes ambient music tailored to your current condition
- Gradually guides you toward a target “calm zone”
- Adjusts tempo, frequency, rhythm density, and sound texture in real time
- Optionally syncs with lighting systems and notification controls for full environment adaptation

Example: if your heart rate spikes, CalmBeat slows the BPM, rolls off high frequencies, and introduces soft ambient textures to gently stabilize your nervous system.
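To make the mapping concrete, here is a simplified sketch of how a heart-rate reading could drive the music parameters. The resting baseline, tempo range, filter cutoffs, and the linear mapping itself are illustrative placeholders, not CalmBeat's production tuning:

```python
def music_params(heart_rate_bpm: float, resting_bpm: float = 60.0) -> dict:
    """Map the current heart rate to tempo and filter settings.

    The further the heart rate is above resting, the slower and softer
    the music becomes, nudging the listener toward the calm zone.
    """
    # Normalized stress proxy in [0, 1]: 0 at resting, 1 at resting + 60 bpm.
    stress = min(max((heart_rate_bpm - resting_bpm) / 60.0, 0.0), 1.0)

    return {
        # Slow the music down as stress rises (90 BPM -> 60 BPM).
        "tempo_bpm": 90.0 - 30.0 * stress,
        # Roll off high frequencies under stress (8 kHz -> 2 kHz cutoff).
        "lowpass_cutoff_hz": 8000.0 - 6000.0 * stress,
        # Thin out rhythmic events as stress rises.
        "rhythm_density": 1.0 - 0.6 * stress,
    }

print(music_params(120))  # elevated heart rate -> slower, darker mix
```

In the real system these targets feed the Web Audio layer, which ramps toward them gradually rather than jumping, so transitions stay imperceptible.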

How we built it

- Frontend: React + Web Audio API for real-time sound control
- Backend: Python (FastAPI) for AI inference and biometric processing
- AI Layer: a stress-detection model using heart rate + HRV, plus a generative music engine (rule-based + ML-assisted composition)
- Integration: wearables (Apple Watch / Fitbit APIs); MIDI + a digital audio engine for live mixing
- Sound Design: procedural ambient layers (pads, textures, binaural-style effects) with adaptive BPM and harmonic scaling
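The rule-based side of the stress detection can be sketched roughly as follows. RMSSD is a standard time-domain HRV measure; the weights and normalization ranges here are hypothetical stand-ins, not our trained model:

```python
import math


def rmssd(rr_intervals_ms: list) -> float:
    """Root mean square of successive differences between R-R intervals,
    a common time-domain HRV metric (higher generally means calmer)."""
    diffs = [b - a for a, b in zip(rr_intervals_ms, rr_intervals_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))


def stress_score(heart_rate_bpm: float, rr_intervals_ms: list) -> float:
    """Combine elevated heart rate and suppressed HRV into a [0, 1] score.

    Weights and ranges are illustrative: HR is normalized over 60-120 bpm,
    and RMSSD below ~20 ms is read as stressed, above ~60 ms as calm.
    """
    hr_term = min(max((heart_rate_bpm - 60.0) / 60.0, 0.0), 1.0)
    hrv_term = min(max((60.0 - rmssd(rr_intervals_ms)) / 40.0, 0.0), 1.0)
    return 0.5 * hr_term + 0.5 * hrv_term
```

In the full pipeline this score is one input among several; the ML-assisted layer refines it per user over time.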

Challenges we ran into

- Real-time latency: syncing biometric input with audio output without perceptible delay
- Data noise: wearable data can be inconsistent or laggy
- Music generation quality: making AI-generated audio feel natural, not robotic
- Personalization: different users relax in different ways; there is no universal “calm”
- Hardware limitations: not all wearables expose full biometric access
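For the data-noise challenge, one simple tactic is an exponential moving average with a spike gate, so a single glitched sample never jerks the music. The smoothing factor and jump threshold below are illustrative values, not the exact ones we shipped:

```python
def smooth(readings, alpha=0.2, max_jump=25.0):
    """Smooth noisy heart-rate samples and ignore implausible spikes.

    alpha controls responsiveness (higher = faster tracking); readings
    that jump more than max_jump bpm from the current estimate are
    treated as sensor glitches and skipped.
    """
    smoothed = []
    est = None
    for r in readings:
        if est is None:
            est = r  # seed the estimate with the first sample
        elif abs(r - est) > max_jump:
            pass  # likely a glitch: keep the previous estimate
        else:
            est = alpha * r + (1 - alpha) * est
        smoothed.append(est)
    return smoothed

print(smooth([70, 72, 200, 74]))  # the 200 bpm glitch is rejected
```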

Accomplishments that we're proud of

- Built a working real-time biofeedback loop between body and music
- Achieved smooth transitions in adaptive sound without breaking immersion
- Created a system that feels intuitive, not clinical
- Designed a calming experience that DJs, creators, and everyday users can all enjoy
- Successfully demonstrated stress-reduction scenarios in live testing

What we learned

- Music is one of the most powerful real-time emotional interfaces
- Biofeedback systems must prioritize subtlety over aggression
- AI doesn’t need to replace creativity; it can augment human feeling
- Latency and UX matter more than model complexity in wellness apps
- Personalization is the key to meaningful calm-tech experiences

What's next for CalmBeat: Bio-Responsive AI Music Engine

- Train personalized AI models per user (adaptive learning over time)
- Integrate with smart home systems (lights, temperature, environment)
- Add guided breathing + rhythm synchronization features
- Expand into a DJ/live performance mode (bio-reactive sets 👀)
- Build a mobile app with offline capabilities
- Partner with wellness platforms and mental health providers
