Inspiration
Every emotion has its own resonance—a subtle internal frequency that shapes how we feel. When two frequencies align, resonance amplifies. It transforms.
We asked:
What if music could find your emotional resonance—and meet it precisely?
Most music apps recommend or generate songs, but they don’t understand you.
We wanted something more human: a system that listens, interprets, and gently helps you shift how you feel.
What it does
Resona is an AI‑powered emotional feedback music engine that:
- Takes multi‑modal input (voice + text)
- Extracts emotional signals (tone, energy, pitch, semantics)
- Reasons about your emotional state
- Designs a personalized, multi‑phase music plan
- Generates adaptive music in real time
- Evolves through a feedback loop
It calms anxiety, lifts low moods, and helps you refocus.
It’s not just music — it’s a guided emotional transformation.
How we built it
A full reasoning‑driven pipeline from emotion → structure → sound.
1. Emotion Extraction
- CNN‑based audio emotion probabilities
- RMS energy + pitch variance
- Text semantic analysis
- Unified emotional state fusion
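For a flavor of the acoustic side, here is a minimal sketch (using librosa) of the energy/pitch features and the fusion step; `cnn_probs`, `text_probs`, and the fusion weight are illustrative stand-ins for our CNN and text-analysis outputs, not the exact production code:

```python
import numpy as np
import librosa

def extract_acoustic_features(path):
    """RMS energy and pitch variance from a voice recording."""
    y, sr = librosa.load(path, sr=None)
    rms = librosa.feature.rms(y=y).mean()          # overall vocal energy
    f0, _, _ = librosa.pyin(y, sr=sr,
                            fmin=librosa.note_to_hz("C2"),
                            fmax=librosa.note_to_hz("C7"))
    pitch_var = np.nanvar(f0)                      # NaNs mark unvoiced frames
    return rms, pitch_var

def fuse_emotions(cnn_probs, text_probs, audio_weight=0.6):
    """Weighted fusion of audio and text emotion distributions
    (both dicts mapping emotion -> probability)."""
    emotions = set(cnn_probs) | set(text_probs)
    fused = {e: audio_weight * cnn_probs.get(e, 0.0)
                + (1 - audio_weight) * text_probs.get(e, 0.0)
             for e in emotions}
    total = sum(fused.values()) or 1.0
    return {e: p / total for e, p in fused.items()}  # renormalize
```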
2. Reasoning Layer (K2 Think V2)
- Interprets emotional cause + intensity
- Maps emotion → music‑theory parameters
- Plans a 3‑phase emotional transition (e.g., anxious → neutral → calm)
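In Resona the mapping is reasoned out by K2 per user and per moment; the hardcoded table below is only a sketch of what emotion → music-theory parameters look like:

```python
# Illustrative only: K2 produces these values dynamically;
# this table just shows the shape of the emotion -> music mapping.
EMOTION_TO_MUSIC = {
    "anxious": {"tempo_bpm": 110, "mode": "minor",  "energy": 0.8},
    "neutral": {"tempo_bpm": 90,  "mode": "dorian", "energy": 0.5},
    "calm":    {"tempo_bpm": 65,  "mode": "major",  "energy": 0.3},
}

def plan_transition(phases=("anxious", "neutral", "calm")):
    """Lay out a 3-phase emotional transition as music-theory settings."""
    return [{"emotion": e, **EMOTION_TO_MUSIC[e]} for e in phases]
```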
3. Structured Music Plan
- Tempo, key, scale mode
- Energy progression
- Instrumentation
- Emotional rationale
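For concreteness, a plan in this spirit might look like the JSON below (field names are illustrative, not our exact schema):

```json
{
  "phases": [
    {"emotion": "anxious", "tempo_bpm": 110, "key": "A", "mode": "minor",
     "energy": 0.8, "instruments": ["piano", "strings"]},
    {"emotion": "neutral", "tempo_bpm": 90, "key": "C", "mode": "dorian",
     "energy": 0.5, "instruments": ["piano", "pad"]},
    {"emotion": "calm", "tempo_bpm": 65, "key": "C", "mode": "major",
     "energy": 0.3, "instruments": ["piano"]}
  ],
  "rationale": "Gradually slow the tempo and brighten the mode to guide anxiety toward calm."
}
```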
4. Music Generation
- Structured plan → MIDI
- MIDI → audio via FluidSynth (with a numpy synthesis fallback)
- Emotion‑specific scales, voicings, arpeggios, and humanized velocity
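A stripped-down sketch of the plan → MIDI step (here with pretty_midi; the real generator also adds chord voicings, arpeggios, and phase-to-phase transitions):

```python
import random
import pretty_midi

SCALES = {  # semitone offsets from the root
    "major":  [0, 2, 4, 5, 7, 9, 11],
    "minor":  [0, 2, 3, 5, 7, 8, 10],
    "dorian": [0, 2, 3, 5, 7, 9, 10],
}

def phase_to_midi(phase, root=60, bars=4, out_path="phase.mid"):
    """Render one plan phase to MIDI with humanized velocities."""
    pm = pretty_midi.PrettyMIDI()
    piano = pretty_midi.Instrument(program=0)  # acoustic grand piano
    beat = 60.0 / phase["tempo_bpm"]
    scale = SCALES[phase["mode"]]
    t = 0.0
    for _ in range(bars * 4):                  # one note per beat, 4/4
        pitch = root + random.choice(scale)
        velocity = int(60 + 50 * phase["energy"] + random.randint(-8, 8))
        piano.notes.append(pretty_midi.Note(velocity=min(velocity, 127),
                                            pitch=pitch, start=t, end=t + beat))
        t += beat
    pm.instruments.append(piano)
    pm.write(out_path)
    # Audio rendering: pm.fluidsynth(sf2_path=...) via pyfluidsynth when
    # available, otherwise a plain numpy sine-synthesis fallback.
    return out_path
```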
5. Voice Narration
- ElevenLabs narration aligned to emotional intent
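A minimal sketch of that call against the ElevenLabs text-to-speech REST endpoint (voice settings and model selection omitted; in Resona the narration script itself comes from the reasoning layer):

```python
import requests

def narrate(text, voice_id, api_key, out_path="narration.mp3"):
    """Generate narration audio via the ElevenLabs text-to-speech API."""
    resp = requests.post(
        f"https://api.elevenlabs.io/v1/text-to-speech/{voice_id}",
        headers={"xi-api-key": api_key},
        json={"text": text},
        timeout=60,
    )
    resp.raise_for_status()
    with open(out_path, "wb") as f:
        f.write(resp.content)  # MP3 audio bytes
    return out_path
```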
6. Feedback Loop
- User feedback updates the plan
- Music regenerates and adapts in real time
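In production the loop re-invokes K2 to revise the whole plan; this sketch only illustrates the idea of nudging plan parameters from a single feedback rating:

```python
def apply_feedback(plan, rating):
    """rating in [-1, 1]: negative means 'still too tense',
    positive means 'too sleepy'. Clamp values to sane ranges."""
    delta = -10 if rating < 0 else 10 if rating > 0 else 0
    for phase in plan["phases"]:
        phase["tempo_bpm"] = min(140, max(50, phase["tempo_bpm"] + delta))
        phase["energy"] = min(1.0, max(0.1, phase["energy"] + delta / 100))
    return plan  # the generator then re-renders audio from the updated plan
```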
7. Frontend & UI (Lovable)
Built with Lovable for rapid orchestration:
- Real‑time audio recording + text input
- Live emotional analysis
- Smooth playback of generated music
- Automatic ngrok syncing
- Clean, calming interface deployed on retunemood.tech
Lovable ties the entire system together, turning a complex backend into a simple, intuitive emotional experience.
Challenges we ran into
Turning emotion into structure
Mapping feelings → music parameters required reasoning, not just classification.
Prompt engineering for K2
Ensuring consistent, valid JSON outputs while preserving creativity was non-trivial (see the validation sketch after this list).
Balancing creativity vs. control
We needed music that is both emotionally expressive and predictable enough to guide mood.
Multi-modal fusion
Combining voice signals and text meaning into one coherent emotional state.
Dataset training
We needed to spend substantial time training the CNN + LSTM model to make it as accurate as possible.
Real-time orchestration
Ensuring the pipeline stayed fast, stable, and reactive.
Frontend orchestration
Making Lovable handle real-time audio, backend calls, UI updates, and domain-level deployment on a .tech site, putting all the pieces together into a seamless experience.
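For the JSON-consistency challenge, the fix was a validate-and-retry loop around the model call. A minimal sketch, where `call_k2` is a hypothetical wrapper around the model API and the required keys are illustrative:

```python
import json

REQUIRED_KEYS = {"phases", "rationale"}  # illustrative, not the exact schema

def request_plan(call_k2, prompt, max_retries=3):
    """Ask the reasoning model for a music plan, retrying until the JSON
    parses and contains the fields the generator needs."""
    for _ in range(max_retries):
        raw = call_k2(prompt)
        try:
            plan = json.loads(raw)
        except json.JSONDecodeError as err:
            # Feed the error back so the model can self-correct.
            prompt += f"\nYour last reply was not valid JSON ({err}). Reply with JSON only."
            continue
        if isinstance(plan, dict) and REQUIRED_KEYS.issubset(plan):
            return plan
        prompt += f"\nMissing fields: {REQUIRED_KEYS - set(plan)}. Reply with the full JSON object."
    raise RuntimeError("Model never produced a valid plan")
```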
Accomplishments that we're proud of
- Built a reasoning‑first emotional engine, not just a generative model
- Designed a full emotion → plan → music pipeline
- Tailored a CNN + LSTM model for mood recognition using multiple audio analysis methods (MFCC, CQT, wavelet, and STFT)
- Implemented adaptive feedback loops
- Created a human‑centered experience that feels personal and intentional
- Delivered a system that is both technically deep and emotionally meaningful
- Built a polished, responsive UI using Lovable, deployed on a .tech domain
What we learned
- Emotion is not just classification—it’s a trajectory
- Music can be structured as a controlled transformation process
- Reasoning models (like K2) unlock a new layer of AI capability: not just generating outputs, but making decisions
- The most impactful AI systems are:
  - interpretable
  - adaptive
  - human-centered
- UI matters: the emotional experience is shaped as much by the interface as the audio
What's next for Resona
Resonance visualization
We plan to add a dynamic, particle-based animation that visualizes the music's frequency and emotional flow in real time. This creates a multi-sensory experience, helping users see their emotional resonance and improving focus, engagement, and self-awareness.
Creative inspiration through visualization
The resonance animation is not just visual, it's cognitive. By translating music into flowing frequency patterns, it helps users enter a state of imagination and creative flow. By matching your emotional resonance, Resona can enhance ideation, creativity, and deeper engagement.
Narration that matches the mood
We plan to refine the narrator's voice so it aligns more closely with the target emotional state: less serious, less robotic, and more attuned to the mood the music is guiding you toward.
More expressive, less drastic music generation
Future versions will tune the generative engine to produce transitions that feel smoother, more expressive, and emotionally nuanced, while still maintaining the unique character of each emotional arc.
Deeper personalization
Learning individual emotional patterns over time to refine how music responds to each user.
Wearable integration
Incorporating physiological signals (heart rate, stress levels) to enhance emotional understanding.
Psychology-grounded design
We aim to ground Resona in established psychological and music therapy research, validating its impact through studies and clinical testing.
Toward guided music therapy
Evolving Resona from an adaptive music experience into a structured, research-backed emotional support system.
Long-term vision
To build AI that doesn’t just generate music—
but understands your inner state, resonates with it,
and helps guide you toward a healthier, more balanced emotional state.
