Inspiration
The idea for MoodTunes was born from a simple observation – music has the power to transform our emotions. Whether we're feeling down, overwhelmed, or celebratory, the right song can lift our spirits or make us feel understood. We wanted to create a voice-driven companion that detects how you're feeling and instantly plays music that matches or enhances that mood. Inspired by mental health awareness and the intersection of AI and creativity, we envisioned an app that combines voice input, emotion analysis, and intelligent music curation – all in one seamless experience.
What it does
MoodTunes is an AI-powered web app that:
- Listens to your voice and understands your current mood using natural language.
- Uses Gemini API to detect emotional tone and generate contextual feedback.
- Curates and plays music tailored to your mood (e.g., calming, energetic, reflective).
- Optionally speaks back a motivational or soothing message using ElevenLabs' human-like AI voice.
- Offers free and premium usage tiers, tracked in localStorage (a sketch of this follows).
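As a rough illustration of that last point, here is how the free/premium split can be tracked client-side. The storage keys and the free-use limit below are our own illustrative assumptions, not MoodTunes' actual values:

```typescript
// Illustrative client-side tier tracking. The keys ("moodtunes_tier",
// "moodtunes_uses") and the limit are hypothetical, for the sketch only.
type Tier = "free" | "premium";

const FREE_USE_LIMIT = 5; // assumed quota for illustration

export function getTier(): Tier {
  return (localStorage.getItem("moodtunes_tier") as Tier) ?? "free";
}

export function canAnalyzeMood(): boolean {
  if (getTier() === "premium") return true;
  const uses = Number(localStorage.getItem("moodtunes_uses") ?? "0");
  return uses < FREE_USE_LIMIT;
}

export function recordUse(): void {
  const uses = Number(localStorage.getItem("moodtunes_uses") ?? "0");
  localStorage.setItem("moodtunes_uses", String(uses + 1));
}
```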
How we built it
We built MoodTunes as a React Single Page Application (SPA) using:
- Tailwind CSS for styling a minimal and responsive UI
- Web Speech API to capture and transcribe voice input (sketched after this list)
- Gemini API (Google AI) to analyze the transcribed text and determine mood
- ElevenLabs API (optional) to generate speech responses for motivation
- React Router for navigation between components
- LocalStorage for storing session preferences and access type (free vs. premium)
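To make the voice-capture step concrete, here is a minimal Web Speech API sketch along the lines of what MoodCapture does. Only the `SpeechRecognition` usage reflects the actual browser API; the `captureMood` wrapper and its callback are our own illustrative names:

```typescript
// Minimal Web Speech API sketch. Chrome still exposes the API under the
// webkitSpeechRecognition prefix, hence the fallback lookup.
export function captureMood(onTranscript: (text: string) => void): void {
  const SpeechRecognition =
    (window as any).SpeechRecognition ?? (window as any).webkitSpeechRecognition;
  if (!SpeechRecognition) {
    console.warn("Speech recognition is not supported in this browser.");
    return;
  }

  const recognition = new SpeechRecognition();
  recognition.lang = "en-US";
  recognition.interimResults = false; // deliver only final transcriptions

  recognition.onresult = (event: any) => {
    const transcript = event.results[0][0].transcript;
    onTranscript(transcript);
  };
  recognition.onerror = (event: any) =>
    console.error("Recognition error:", event.error);

  recognition.start();
}
```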
Key components:
- Landing: Introduction and CTA
- MoodCapture: Captures and transcribes user input
- MoodAnalysis: Sends data to Gemini and interprets results (sketched below)
- MusicPlayer: Plays suggested music based on mood
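The analysis step reduces to a single `generateContent` request against the Gemini API. In this sketch the model name, prompt wording, and mood labels are illustrative assumptions, not necessarily what MoodTunes ships:

```typescript
// Illustrative Gemini call for mood detection. The model, prompt, and
// mood labels are assumptions made for this sketch.
const GEMINI_URL =
  "https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-flash:generateContent";

export async function detectMood(transcript: string, apiKey: string): Promise<string> {
  const prompt =
    `Classify the speaker's mood as one of: calm, energetic, reflective, sad, happy. ` +
    `Reply with the single word only.\n\nTranscript: "${transcript}"`;

  const res = await fetch(`${GEMINI_URL}?key=${apiKey}`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ contents: [{ parts: [{ text: prompt }] }] }),
  });
  if (!res.ok) throw new Error(`Gemini request failed: ${res.status}`);

  const data = await res.json();
  // The generated text is nested under candidates/content/parts.
  return data.candidates[0].content.parts[0].text.trim().toLowerCase();
}
```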
Challenges we ran into
- Setting up proxy communication with the Gemini API in development environments (e.g., ECONNREFUSED errors); see the proxy sketch after this list.
- Handling variations in user voice input and ensuring consistent transcription.
- Managing rate limits and latency when calling AI APIs in real-time.
- Designing fallback flows for when APIs fail (e.g., hardcoded moods and music for offline/backup usage).
- Balancing between minimal UI and rich user feedback within the single-page format.
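On the proxy issue specifically: since Express is in the stack, one workaround is a small server-side passthrough, so the browser only ever talks to the app's own origin. This is a sketch under that assumption; the `/api/mood` route and port 3001 are our own choices for illustration:

```typescript
// Illustrative Express passthrough for the Gemini API. Routing browser
// requests through the same origin avoids CORS/proxy misconfiguration
// (a common source of ECONNREFUSED in dev) and keeps the API key out
// of the client bundle. Requires Node 18+ for the global fetch.
import express from "express";

const app = express();
app.use(express.json());

app.post("/api/mood", async (req, res) => {
  try {
    const upstream = await fetch(
      `https://generativelanguage.googleapis.com/v1beta/models/gemini-1.5-flash:generateContent?key=${process.env.GEMINI_API_KEY}`,
      {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(req.body),
      }
    );
    res.status(upstream.status).json(await upstream.json());
  } catch (err) {
    // Stable error shape so the client can fall back to its
    // hardcoded moods and music.
    res.status(502).json({ error: "Gemini upstream unavailable" });
  }
});

app.listen(3001, () => console.log("Dev proxy listening on :3001"));
```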
Accomplishments we're proud of
- Creating a working end-to-end voice-driven experience within the hackathon timeframe, built primarily with Bolt.new.
- Successfully integrating Google Gemini for mood understanding and ElevenLabs for expressive feedback (see the text-to-speech sketch after this list).
- Making a user-friendly and calming UI that aligns with the emotional support theme.
- Implementing both a free tier and a premium simulation without backend complexity.
- Building a solution that could truly help users feel seen and supported.
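For reference, the ElevenLabs integration boils down to a single text-to-speech request that returns raw audio. In this sketch the `voiceId` and `model_id` are placeholders; any ElevenLabs voice would work:

```typescript
// Illustrative ElevenLabs text-to-speech call. The voiceId and model_id
// are placeholders, not necessarily the ones MoodTunes uses.
export async function speak(
  text: string,
  apiKey: string,
  voiceId: string
): Promise<void> {
  const res = await fetch(`https://api.elevenlabs.io/v1/text-to-speech/${voiceId}`, {
    method: "POST",
    headers: { "xi-api-key": apiKey, "Content-Type": "application/json" },
    body: JSON.stringify({ text, model_id: "eleven_multilingual_v2" }),
  });
  if (!res.ok) throw new Error(`ElevenLabs request failed: ${res.status}`);

  // The endpoint returns audio bytes; play them straight from a blob URL.
  const blob = await res.blob();
  await new Audio(URL.createObjectURL(blob)).play();
}
```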
What we learned
- How to integrate advanced LLM APIs (Gemini) into real-time frontend applications.
- How to work with browser-based voice input and ensure accuracy in transcription.
- The power of combining multiple AI tools to create emotionally intelligent applications.
- How small UI/UX tweaks (like color palettes, tone of messages, and audio feedback) can dramatically impact user experience.
What's next for MoodTunes – AI-Generated Music Based on Your Mood
- 🎧 Integrate Spotify/YouTube Music API for dynamic playlist generation.
- 🧠 Train a custom mood classifier to reduce reliance on third-party APIs.
- 🌐 Add user authentication and saved mood history across devices.
- 📱 Launch a mobile-friendly PWA version for broader reach.
- ❤️ Collaborate with mental health professionals to suggest scientifically backed audio therapies or meditations.
- 🔊 Improve accessibility with multi-language support and voice-only mode.
MoodTunes is more than just a music app – it's a step toward making tech empathetic.
Built With
- elevenlabs
- express.js
- gemini-api
- javascript
- lucide-react
- netlify
- react
- tailwind-css
- typescript
