🎧 Moodsic: Emotion-Based Music Recommender
💡 Inspiration
Our team was inspired by the idea that music has the power to lift and heal our emotions. Whether we're feeling down, excited, or anxious, music often becomes the go-to outlet for self-expression and comfort. We wanted to create a seamless experience where a user's mood could automatically influence the music they hear—without needing to search or type a single word. Moodsic was born from the desire to blend emotional intelligence with real-time technology to create a more intuitive, empathetic user experience.
🎯 What it does
Moodsic detects a user’s emotional state through facial expression analysis using a webcam and then recommends a song that matches or elevates their mood. It connects to the Spotify Web API to pull tracks from curated playlists that align with the detected emotion and optionally filters them by genre preferences.
🏗️ How we built it
- Frontend: We used HTML, CSS, and JavaScript to create a clean, responsive UI with live webcam integration.
- Backend: Built using Flask (Python) to handle routing, process emotion data, and interact with Spotify’s API.
- Emotion Detection: Implemented using OpenCV and a deep learning-based facial emotion recognition model.
- Spotify Integration: Used the Spotify Web API for authentication, playlist access, and track retrieval based on mood and genre.
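At its core, the recommendation step is a mapping from a detected emotion to a curated playlist, with an optional genre filter on the retrieved tracks. Here is a minimal sketch of that logic; the playlist IDs, emotion labels, and track fields are hypothetical placeholders, not the ones Moodsic actually uses:

```python
import random

# Hypothetical mapping from a detected emotion to a curated playlist ID.
EMOTION_PLAYLISTS = {
    "happy": "playlist-id-happy",      # placeholder ID
    "sad": "playlist-id-sad",          # placeholder ID
    "angry": "playlist-id-angry",      # placeholder ID
    "neutral": "playlist-id-neutral",  # placeholder ID
}

def pick_playlist(emotion):
    """Return the playlist ID for an emotion, falling back to 'neutral'."""
    return EMOTION_PLAYLISTS.get(emotion, EMOTION_PLAYLISTS["neutral"])

def recommend_track(tracks, genres=None):
    """Pick a random track, optionally restricted to preferred genres.

    `tracks` is a list of dicts with 'name' and 'genre' keys -- a simplified
    stand-in for the track objects the Spotify Web API returns.
    """
    if genres:
        filtered = [t for t in tracks if t.get("genre") in genres]
        if filtered:
            tracks = filtered
    return random.choice(tracks)
```

In the real app, `pick_playlist` would feed the playlist ID into a Spotify API call and `recommend_track` would run over the tracks that call returns.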
🚧 Challenges we ran into
- Emotion detection accuracy: Lighting, face angles, and webcam quality affected detection reliability.
- Spotify API limitations: Handling authentication tokens and API rate limits required careful management.
- Real-time performance: Ensuring fast and smooth communication between the webcam input, backend processing, and Spotify's API.
- Cross-browser compatibility: Making sure webcam access and audio playback worked consistently across different browsers and devices.
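A pattern that helps with the token-management challenge above is to cache the access token and refresh it only shortly before it expires, so most requests skip the auth round trip. This sketch abstracts the actual HTTP call behind an injected `fetch_token` callable (a hypothetical helper, e.g. a wrapper around Spotify's client-credentials endpoint), so the pattern works with any HTTP library:

```python
import time

class TokenCache:
    """Caches an OAuth access token and refreshes it shortly before expiry.

    `fetch_token` is any callable returning (access_token, expires_in_seconds).
    """

    def __init__(self, fetch_token, margin=60):
        self.fetch_token = fetch_token
        self.margin = margin  # refresh this many seconds before expiry
        self._token = None
        self._expires_at = 0.0

    def get(self):
        # Refresh if we have no token yet or it is inside the expiry margin.
        if self._token is None or time.time() >= self._expires_at - self.margin:
            self._token, expires_in = self.fetch_token()
            self._expires_at = time.time() + expires_in
        return self._token
```

Keeping the refresh logic in one place also makes rate-limit handling easier, since every Spotify request goes through the same `get()` call.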
🏆 Accomplishments that we're proud of
- Successfully integrated emotion detection with real-time song recommendation
- Built a fully functional app that bridges AI, web development, and music in a meaningful way
- Created a responsive and user-friendly interface
- Worked efficiently as a team and completed the project within the deadline
📚 What we learned
- How to combine machine learning with real-time web applications
- How to use the Spotify API for search, authentication, and data parsing
- How to manage frontend-backend communication effectively
- The importance of user experience design, especially when dealing with emotion-sensitive features
- How to collaborate under pressure and divide work strategically
🚀 What's next for Moodsic
- Expanding the emotion detection model to support more nuanced emotional states
- Personalizing recommendations further using users’ listening history or mood trends
- Adding support for mobile browsers and native apps
- Implementing voice-based emotion input for accessibility
- Exploring integration with other streaming platforms beyond Spotify