About Moodify
Our Inspiration ✨
Music is deeply personal, isn't it? We've always found ourselves turning to music during specific emotional moments. Sometimes, when we're feeling happy and exuberant, we want songs that amplify that joy, making us want to dance around the room. Other times, when feeling sad or reflective, we might seek out music that understands that feeling, offering comfort or a space for contemplation. Maybe we want music to match our mood, or maybe we want it to help shift our mood.
The challenge we often faced was finding the right music for that specific, nuanced feeling right now. Sifting through playlists or relying on generic mood categories often missed the mark. Plus, we love discovering new music, but finding new tracks that perfectly fit a transient emotional state felt like searching for a needle in a haystack.
This sparked the idea for Moodify: a tool that could listen to how we feel, expressed in our own words, and instantly curate a playlist blending familiar favorites and fresh discoveries perfectly tuned to that emotion. We wanted to create a more intuitive, empathetic way to connect with music.
How We Built It ⚙️
Moodify bridges natural language and music discovery through a multi-step process:
- Understanding Your Mood: When you describe how you're feeling, we send this text to Google's powerful Gemini API. It performs nuanced sentiment analysis, identifying not just positive or negative feelings, but a spectrum of specific emotions (like joy, sadness, calmness, excitement, etc.) and their intensity.
- Mapping Emotions to Music: We developed a system to translate these identified emotions into relevant musical characteristics. This involves mapping emotions to descriptive tags (like "upbeat", "melancholy", "energetic", "chill") commonly associated with certain sounds.
- Connecting with Spotify: Using the Spotify Web API, we dive into the musical landscape:
- We fetch a sample of tracks from your Spotify library (Saved Tracks and recent playlists) to understand your taste.
- We leverage the Last.fm API (using asynchronous calls for speed) to gather descriptive tags for these library tracks.
- We filter your tagged library tracks, scoring them based on how well their tags align with the mood derived from your input text. This ensures the playlist includes songs you already love that fit the vibe.
- To find new music, we construct targeted queries for Spotify Search. These queries combine relevant mood tags and genres (inferred from your top artists) to find fresh tracks likely to match the desired feeling.
- Curating the Playlist: Finally, we combine the best-matching tracks from your library with the newly discovered recommendations. We apply diversity rules (avoiding too many songs from the same artist or album) and aim for a good mix (around 40-50% familiar, 50-60% new) before creating a brand new playlist directly in your Spotify account.
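The tag-scoring and diversity steps above can be sketched roughly like this. This is a simplified illustration, not Moodify's actual code: the `MOOD_TAGS` mapping, the dict shape of a track, and the scoring formula are all assumptions made for the example.

```python
# Hypothetical emotion-to-tag mapping (the real mapping is richer).
MOOD_TAGS = {
    "joy": {"upbeat", "happy", "dance", "energetic"},
    "sadness": {"melancholy", "sad", "mellow", "acoustic"},
    "calmness": {"chill", "ambient", "calm", "acoustic"},
}

def score_track(track_tags, mood_tags):
    """Score a track by the fraction of mood tags its Last.fm tags cover."""
    if not mood_tags:
        return 0.0
    return len(set(track_tags) & mood_tags) / len(mood_tags)

def pick_library_matches(library, emotion, limit=10, max_per_artist=2):
    """Rank library tracks against the detected emotion, with a simple
    diversity rule capping how many tracks one artist contributes."""
    mood_tags = MOOD_TAGS.get(emotion, set())
    ranked = sorted(library,
                    key=lambda t: score_track(t["tags"], mood_tags),
                    reverse=True)
    picked, per_artist = [], {}
    for t in ranked:
        if score_track(t["tags"], mood_tags) == 0:
            break  # no tag overlap at all: stop, the rest score zero too
        if per_artist.get(t["artist"], 0) >= max_per_artist:
            continue
        picked.append(t)
        per_artist[t["artist"]] = per_artist.get(t["artist"], 0) + 1
        if len(picked) == limit:
            break
    return picked
```

The same overlap score can drive both sides of the mix: high-scoring library tracks fill the "familiar" share, while the mood tags seed Spotify search queries for the "new" share.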
What We Learned 🧠
This project was an incredible learning journey! We significantly deepened our skills in:
- API Integration: Working with multiple complex APIs (Gemini, Spotify, Last.fm) and handling their different authentication methods, data structures, and rate limits.
- Data Mapping: Translating abstract concepts (emotions) into concrete data points (music tags, search queries) required iterative refinement.
- Asynchronous Programming: Implementing asynchronous API calls (especially for Last.fm tagging) was crucial for optimizing performance and keeping the user experience responsive.
- Problem Solving & Adaptation: Debugging API errors (like those tricky 403s!) and working around limitations or deprecations demanded persistent troubleshooting and forced us to pivot our technical strategy multiple times.
- System Design: Thinking about the flow of data, caching strategies, and how different components interact to achieve the final goal.
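The asynchronous tagging pattern mentioned above (fetching Last.fm tags for many tracks concurrently) looks roughly like the sketch below. The real Last.fm HTTP call is stubbed out so the example runs offline; the function names and the semaphore-based concurrency cap are assumptions for illustration, not Moodify's actual implementation.

```python
import asyncio

async def fetch_tags(track):
    # In the real app this would be an HTTP request to Last.fm
    # (track.getTopTags); stubbed here to keep the sketch self-contained.
    await asyncio.sleep(0)  # stands in for network latency
    return track, ["chill", "indie"]

async def tag_library(tracks, concurrency=5):
    """Fetch tags for all tracks concurrently, capping in-flight requests
    so we stay under the API's rate limit."""
    sem = asyncio.Semaphore(concurrency)

    async def bounded(track):
        async with sem:
            return await fetch_tags(track)

    results = await asyncio.gather(*(bounded(t) for t in tracks))
    return dict(results)

tags_by_track = asyncio.run(tag_library(["Song A", "Song B"]))
```

Running the lookups with `asyncio.gather` instead of sequentially is what keeps the response time tolerable when a library sample has dozens of tracks.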
Challenges We Faced 🚧
It wasn't always smooth sailing! Key challenges included:
- API Limitations & Deprecations: Our initial plans heavily involved Spotify's Recommendations endpoint and detailed Audio Features. Discovering these were deprecated or restricted for newer apps required a major strategic shift towards relying more on search and Last.fm tags, fundamentally changing our approach.
- Accuracy vs. Speed: Constantly balancing the desire for deep analysis (more API calls, richer data) against the need for a reasonably fast response time. Asynchronous operations helped, but it remained a trade-off.
- Subjectivity of Mood: Music and emotion are inherently subjective. Mapping text sentiment to universally agreed-upon musical characteristics is complex. What sounds "happy" or "relaxed" varies from person to person, so perfect accuracy is elusive; we aimed for strong relevance instead.
- API Errors & Rate Limits: Handling intermittent errors, timeouts, and strict rate limits (especially Last.fm's) required robust error handling, retries (where appropriate), and careful sequencing of requests. Debugging vague errors like the persistent 403s demanded patience and eliminating variables.
- Data Consistency: Ensuring data formats (like artist names being strings or dicts) were handled consistently between different API sources and internal processing steps.
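A retry wrapper for the intermittent errors and rate limits described above might look like this. The exception class, function names, and backoff parameters are illustrative assumptions; only genuinely transient failures (timeouts, 429s) should be retried, while hard errors like a misconfigured-auth 403 should surface immediately.

```python
import time

class TransientAPIError(Exception):
    """Hypothetical marker for retryable failures (timeouts, 429 rate limits)."""

def call_with_retries(request, max_attempts=4, base_delay=0.5):
    """Call request(), retrying transient failures with exponential backoff:
    waits base_delay, then 2x, 4x, ... between attempts."""
    for attempt in range(max_attempts):
        try:
            return request()
        except TransientAPIError:
            if attempt == max_attempts - 1:
                raise  # out of attempts: let the caller handle it
            time.sleep(base_delay * (2 ** attempt))
```

Sequencing requests through a wrapper like this also makes it easy to log which endpoint failed, which helped when eliminating variables on the vague 403s.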
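For the data-consistency point, one common fix is a small normalization helper applied at every API boundary. The helper below is a hedged sketch, assuming artist fields arrive either as Spotify-style `{"name": ...}` dicts or as plain strings from cached or Last.fm data.

```python
def artist_name(artist):
    """Normalize an artist field that may be a dict (e.g. Spotify's
    {"name": ...} objects) or a plain string (e.g. cached values)."""
    if isinstance(artist, dict):
        return artist.get("name", "")
    return str(artist)
```

Funneling every source through one helper like this means downstream code (scoring, diversity rules) can assume a single shape.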
Despite the hurdles, overcoming them led to a more resilient and functional application. Moodify is our attempt to make finding the right music, right now, a little bit easier and more magical.