Inspiration
After a long day of locking in, I sat down with my Red Bull, trying to unwind. I noticed how much my music choices affected my mood. When I was feeling down, a sad song seemed to make it worse, and when I was upbeat, the right song could elevate my day even more. That got me thinking: why not create an app that could help people discover music that matches their emotions in real time? I wanted to combine technology and personal well-being in a way that was both fun and meaningful. By using facial recognition and AI to analyze mood, I could help people connect with music that truly resonated with them, making their day just a little bit better, one song at a time.
What it does
This app detects your mood using facial recognition and analyzes text input to suggest songs that match your emotional state. Whether you're feeling happy, sad, or somewhere in between, the app uses AI to identify your mood through a camera feed or by typing into a text box. It then generates personalized song recommendations from Spotify to fit your vibe. The goal is to create a seamless experience where music can enhance your emotional well-being, making it easier to connect with the right soundtrack for your day.
How we built it
I built the app with React, Flask, OpenCV, DeepFace, Hugging Face transformers, and the Spotify API. The front end is styled to loosely resemble ChatGPT, so it feels familiar right away. I used OpenCV and DeepFace to analyze facial expressions in real time and detect emotions, and I integrated a Hugging Face transformer (the j-hartmann emotion model) to extract the emotion behind a user's typed message. Flask serves as the back end, handling communication between the front end and the AI models. I also integrated the Spotify API to fetch personalized song recommendations based on the detected emotion, so the user's experience is enhanced with mood-matching music. For real-time updates, I used WebSockets to enable seamless communication between the front end and back end.
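The glue between the two pieces above is turning a detected emotion into a Spotify query. Here is a minimal sketch of how that mapping could look; the emotion labels follow DeepFace's default set, but the valence/energy targets, genre seeds, and the `recommendation_params` helper are illustrative assumptions, not the app's actual code.

```python
# Illustrative sketch: map a detected emotion label to parameters for
# Spotify's recommendations endpoint. The numeric targets and genre
# seeds below are assumptions chosen for the example, not tuned values.

EMOTION_PROFILES = {
    "happy":    {"target_valence": 0.9, "target_energy": 0.8, "seed_genres": "pop,dance"},
    "sad":      {"target_valence": 0.2, "target_energy": 0.3, "seed_genres": "acoustic,piano"},
    "angry":    {"target_valence": 0.3, "target_energy": 0.9, "seed_genres": "rock,metal"},
    "fear":     {"target_valence": 0.3, "target_energy": 0.4, "seed_genres": "ambient"},
    "surprise": {"target_valence": 0.7, "target_energy": 0.7, "seed_genres": "indie"},
    "disgust":  {"target_valence": 0.3, "target_energy": 0.5, "seed_genres": "alternative"},
    "neutral":  {"target_valence": 0.5, "target_energy": 0.5, "seed_genres": "chill"},
}

def recommendation_params(emotion: str, limit: int = 10) -> dict:
    """Build query parameters for a Spotify recommendations request,
    falling back to a neutral profile for unrecognized labels."""
    profile = EMOTION_PROFILES.get(emotion.lower(), EMOTION_PROFILES["neutral"])
    return {"limit": limit, **profile}
```

The same dictionary works for both input paths: whether the label comes from DeepFace on a camera frame or from the text classifier, the back end only needs one lookup to build the request.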
Challenges we ran into
One challenge was ensuring accurate emotion detection with OpenCV and DeepFace, especially under varying lighting conditions and camera angles. Integrating the Spotify API so that songs actually matched the detected emotions also took time to fine-tune, and managing WebSocket connections for smooth communication between the front end and back end was another hurdle.
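One common way to cope with per-frame misdetections from lighting and angle changes is to smooth predictions over a sliding window instead of trusting each frame individually. The sketch below shows that idea with a majority vote; the `EmotionSmoother` class and its window size are assumptions for illustration, not the app's actual implementation.

```python
from collections import Counter, deque

# Sketch: stabilize noisy per-frame emotion predictions by keeping a
# sliding window of recent labels and reporting the majority vote.
# The window size is an illustrative choice, not a tuned value.

class EmotionSmoother:
    def __init__(self, window_size: int = 15):
        # deque with maxlen automatically discards the oldest label
        self.window = deque(maxlen=window_size)

    def update(self, label: str) -> str:
        """Record the latest per-frame label and return the smoothed one."""
        self.window.append(label)
        return Counter(self.window).most_common(1)[0][0]
```

Feeding each DeepFace result through `update()` means a single badly lit frame no longer flips the recommended playlist.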
Accomplishments that we're proud of
I’m proud of successfully integrating multiple AI tools, using DeepFace for emotion detection and Hugging Face for text analysis, so the app reacts to both facial expressions and written input. The smooth interaction between the front end and back end using Flask and WebSockets was a major win. I’m also happy with how music recommendations are matched to the user's mood through the Spotify API, creating a personalized experience.
What we learned
This project taught me how to develop both the front end and the back end, giving me a deeper understanding of full-stack development. It also introduced me to AI tools like DeepFace for emotion detection and Hugging Face for text analysis, expanding my knowledge of the AI field. I now feel more confident integrating AI models and building complete systems.
What's next for Harmoniq
Next for Harmoniq, the focus is on refining emotion detection and expanding the music recommendations drawn from Spotify's API. Improving text-based emotion recognition and exploring audio-based analysis are also on the roadmap. A user feedback system will improve recommendations over time, and a mobile app is planned to make the experience more accessible. Together, these updates will make Harmoniq a more personalized music assistant.
Built With
- css
- deepface
- flask
- html
- huggingface
- javascript
- opencv
- python
- react
- spotifyapi
- websockets