Inspiration

We've all had those moments where words can't capture how we feel, but music does. That's how we came up with Mood Music: what if your face could pick the perfect song for your mood? We wanted to build something fun and expressive, a tool that combines emotion detection and music in a way that feels personal.

What it does

Mood Music takes a photo you upload, whether it was taken just now or earlier, analyzes your facial expression, and matches the detected emotion with a song that fits your mood. Here's how it works:

  1. The user uploads a .jpg image in the GUI.
  2. The image is analyzed using DeepFace to detect your dominant emotion.
  3. The app displays your detected emotion and shows a button to play a song curated to match your mood.
  4. You click the button, it opens the track on YouTube, and you vibe!
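
Step 2 can be sketched as follows. This is a minimal illustration, not our exact code; it only assumes the shape of DeepFace's `analyze()` return value (a list of per-face result dicts with a `dominant_emotion` key), and the sample values below are made up:

```python
# Minimal sketch of step 2: extracting the dominant emotion from a
# DeepFace.analyze(...) result. DeepFace itself is not imported here;
# we only rely on the shape of its return value.

def dominant_emotion(analysis) -> str:
    """Return the dominant emotion from a DeepFace analyze() result."""
    # Recent DeepFace versions return a list (one entry per detected
    # face); older ones returned a single dict. Handle both.
    if isinstance(analysis, list):
        analysis = analysis[0]
    return analysis["dominant_emotion"]

# In the real app the input would come from something like:
#   analysis = DeepFace.analyze(img_path="photo.jpg", actions=["emotion"])
fake_result = [{"dominant_emotion": "happy",
                "emotion": {"happy": 93.1, "sad": 2.4, "neutral": 4.5}}]
print(dominant_emotion(fake_result))  # happy
```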

How we built it

  • Frontend (GUI): We used Python's tkinter library to create an intuitive interface that lets users upload .jpg images.
  • Emotion Detection: Uploaded images are analyzed using DeepFace to extract the user's dominant emotion.
  • Music Mapping: We created a custom dictionary that maps each emotion (happy, sad, surprised) to a relevant YouTube song.
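
The music mapping in the last bullet can be sketched like this. The dictionary keys follow DeepFace's emotion labels (note DeepFace reports "surprise", not "surprised"), and the URLs are placeholders rather than our actual curated picks:

```python
import webbrowser

# Hypothetical mapping; the real dictionary holds our curated tracks.
EMOTION_TO_SONG = {
    "happy": "https://www.youtube.com/watch?v=EXAMPLE_HAPPY",
    "sad": "https://www.youtube.com/watch?v=EXAMPLE_SAD",
    "surprise": "https://www.youtube.com/watch?v=EXAMPLE_SURPRISE",
}

def song_for(emotion: str) -> str:
    """Look up the track for an emotion, falling back to the happy song."""
    return EMOTION_TO_SONG.get(emotion, EMOTION_TO_SONG["happy"])

def play(emotion: str) -> None:
    """Open the matched track in the default browser (step 4 of the flow)."""
    webbrowser.open(song_for(emotion))
```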

Challenges we ran into

  • Installing DeepFace was tricky: the restricted remote servers (the UVA GPU cluster) made downloading TensorFlow and its dependencies difficult. After five hours of fighting dependency installs, we went to the mentors, who recommended starting over in a fresh environment.
  • Getting the DeepFace and GUI scripts to communicate cleanly required restructuring our code into components.
  • The GUI was slow to update after each analysis finished.

Accomplishments that we're proud of

  • Getting DeepFace and its TensorFlow dependencies installed on the CS GPU server.
  • Connecting the front end to the back end.
  • Running the back end on the server and using the GUI front-end locally.
  • Pushing through and finishing this project as a team!
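
The server/local split in the third bullet could look like this minimal sketch: a stdlib HTTP endpoint stands in for the DeepFace back end, and the local GUI would POST the image bytes to it. The `detect_emotion` stub and the `/analyze` route name are hypothetical, not the project's actual interface:

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer

def detect_emotion(image_bytes: bytes) -> str:
    """Stub for the DeepFace call that would run on the server."""
    # The real back end would write the bytes to disk and run
    # DeepFace.analyze on them; here we return a fixed label.
    return "happy"

class AnalyzeHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the uploaded image bytes from the request body.
        length = int(self.headers.get("Content-Length", 0))
        image = self.rfile.read(length)
        body = json.dumps({"emotion": detect_emotion(image)}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep the demo quiet

def serve(port: int = 0) -> HTTPServer:
    """Start the back end on a background thread; port 0 picks a free port."""
    server = HTTPServer(("127.0.0.1", port), AnalyzeHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

The local tkinter front end would then only need `urllib.request` (or similar) to send the chosen .jpg and read back the emotion label.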

What we learned

  • A big thing we learned is GRIT, from working together for 24 hours to get the project running.
  • We don't have to build our own AI model to recognize facial expressions; pretrained libraries like DeepFace do the heavy lifting.
  • How to build a front end and a back end and connect the two.

What's next for Mood Music

We hope to incorporate real-time facial expression detection through your webcam, so the app can pick songs on the fly. We also want to link to Spotify's curated mood playlists for a much wider music selection.
