## Inspiration

Everyone has days worth remembering — a breakthrough at work, a peaceful morning walk, or an evening with friends. But journaling feels like a chore, and photos don't capture how you felt. We asked: what if your emotions could become music? Inspired by the therapeutic power of music and the rise of generative AI, we built Music Agent to transform daily experiences into personalized songs. It's journaling reimagined — instead of writing paragraphs, you have a conversation and leave with a soundtrack for your day.

## What it does

Music Agent is an AI-powered companion that:

- **Listens to your day** — Through natural conversation, it asks about your mood, key moments, and how you're feeling
- **Understands your vibe** — It picks up on emotions, suggests genres that match, and crafts lyrics inspired by your story
- **Creates your song** — Using ElevenLabs' music generation API, it produces a unique, personalized track just for you
- **Lets you iterate** — Don't love it? Remix it — make it more upbeat, more chill, or try a different style

Every song is one-of-a-kind, generated in real time based on your experience.
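The "Understands your vibe" step comes down to distilling conversation context into a music prompt. A minimal TypeScript sketch of that idea, with hypothetical names (`DaySummary`, `buildMusicPrompt`) — the writeup doesn't show Music Agent's actual prompt format, so treat the shape below as illustrative only:

```typescript
// Illustrative only: the real prompt schema is not public.
interface DaySummary {
  mood: string;          // e.g. "triumphant but tired"
  genre: string;         // e.g. "lo-fi hip hop"
  moments: string[];     // key events mentioned in the chat
  tempo?: "chill" | "upbeat";
}

// Turn a summarized conversation into a single music-generation prompt.
function buildMusicPrompt(day: DaySummary): string {
  const moments = day.moments.join("; ");
  const tempo = day.tempo ?? "chill";
  return (
    `A ${day.genre} song with ${tempo} energy and vocals. ` +
    `Overall mood: ${day.mood}. ` +
    `Lyrics inspired by: ${moments}.`
  );
}

const base: DaySummary = {
  mood: "triumphant but tired",
  genre: "lo-fi hip hop",
  moments: ["shipped the big feature", "evening walk in the rain"],
};

console.log(buildMusicPrompt(base));
// Remixing just tweaks the summary and regenerates:
console.log(buildMusicPrompt({ ...base, tempo: "upbeat", genre: "pop punk" }));
```

Keeping the remix step as "edit the summary, rebuild the prompt" is what makes "more upbeat" or "different style" a one-field change rather than a new conversation.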
## How we built it

**Tech Stack:**

- Frontend: Next.js 16, React 19, Tailwind CSS v4
- AI Chat: Google Gemini 2.0 Flash via the Vercel AI SDK
- Music Generation: ElevenLabs Music API
- Language: TypeScript

**Architecture:**

- Conversational AI gathers context through natural dialogue
- A tool-calling pattern lets the LLM trigger music generation when it has enough context
- Streaming responses for a real-time chat experience
- Audio files are stored and served for instant playback

## Challenges we ran into

- **API rate limits** — Balancing multiple AI services (chat + music) while staying within quotas required careful error handling and graceful degradation
- **Music generation latency** — Generating a full song takes time; we had to design UX that sets expectations and keeps users engaged during the wait
- **Prompt engineering** — Translating emotional descriptions into music prompts that produce good songs required iteration and experimentation
- **Streaming complexity** — Coordinating real-time chat streaming with tool calls and long-running music generation was technically challenging with the AI SDK

## Accomplishments that we're proud of

- **End-to-end experience** — From "How was your day?" to playing your custom song in under 2 minutes
- **Natural conversation flow** — The AI feels like talking to a friend, not filling out a form
- **Beautiful UI** — A polished, dark-themed interface that feels premium and delightful to use
- **Real music** — Not MIDI beeps, but actual, listenable songs with vocals, instruments, and production quality

## What we learned

- **AI orchestration is hard** — Coordinating multiple AI services (LLM + music generation) requires careful state management and error handling
- **UX matters as much as tech** — The best AI is useless if users don't understand what's happening; loading states and feedback are crucial
- **Prompt design is an art** — Small changes in how you describe mood or genre dramatically affect output quality
- **Generative AI is magical** — Seeing a personal song created from a casual conversation never gets old

## What's next for Music Agent

- **Spotify/Apple Music integration** — Save your songs directly to playlists
- **Voice input** — Talk about your day instead of typing
- **Song history** — Build a musical journal over time and listen back to how you felt on any day
- **Social sharing** — Share your daily songs with friends
- **More music styles** — Expand genre options, add sound effects, and improve generation quality
- **Mobile app** — Native iOS/Android experience for on-the-go journaling
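The tool-calling architecture described above — the LLM gathers context, then decides on its own when to trigger the long-running music call — can be sketched with stubs. This is a simplified simulation of the control flow, not the actual Vercel AI SDK or ElevenLabs API; `modelStep` and `generateSong` are stand-ins for the real model and music service:

```typescript
// Simplified simulation of the tool-calling pattern: the "model" keeps
// asking questions until it has mood + genre, then emits a tool call
// that triggers music generation. Real code would use the Vercel AI SDK
// and the ElevenLabs Music API instead of these stubs.

type ToolCall = { tool: "generateSong"; args: { prompt: string } };
type ModelTurn = { text: string; toolCall?: ToolCall };

// Stub LLM: decides whether it has enough context to generate music.
function modelStep(history: string[]): ModelTurn {
  const context = history.join(" ");
  if (context.includes("mood:") && context.includes("genre:")) {
    return {
      text: "Great — generating your song now!",
      toolCall: { tool: "generateSong", args: { prompt: context } },
    };
  }
  return { text: "Tell me more about your day — how are you feeling?" };
}

// Stub music API (stands in for the ElevenLabs call).
async function generateSong(prompt: string): Promise<string> {
  return `audio://song-from-${prompt.length}-chars-of-context`;
}

// Drive the conversation; returns a song URL once the tool fires.
async function chat(userMessages: string[]): Promise<string | undefined> {
  const history: string[] = [];
  for (const msg of userMessages) {
    history.push(msg);
    const turn = modelStep(history);
    history.push(turn.text);
    if (turn.toolCall) {
      // The long-running generation happens here; streaming turn.text
      // first keeps the user engaged during the wait.
      return generateSong(turn.toolCall.args.prompt);
    }
  }
  return undefined; // still gathering context
}
```

The key design choice is that the model, not the UI, owns the "ready to generate" decision — the frontend only has to stream text and handle one slow tool call when it arrives.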
## Built With
- elevenlabs
- gemini
- nextjs
- typescript