Inspiration
Music deeply influences emotions, and we wanted to automate music selection based on facial expressions, eliminating manual playlist selection.

What It Does
When the Emotion-Based Music Player System (EBMPS) recognises a facial expression, it plays a song that corresponds to that emotion. For example, it plays cheerful music for happiness and sorrowful music for sadness.

How We Built It
We used DeepFace for facial expression recognition, React for the front end, and Python for the back end. After capturing an image, the system uses a Haar cascade classifier to detect the face, analyses the expression, and selects a song from the playlist for the corresponding emotion. The user interface provides playback controls and a mood-selection dropdown.
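The backend flow above can be sketched roughly as follows. DeepFace's `analyze` call is the real library API; the playlist contents, file names, and the `pick_song` helper are illustrative assumptions, not the project's actual data.

```python
import random

# Hypothetical emotion → playlist mapping (illustrative file names).
PLAYLISTS = {
    "happy": ["upbeat_1.mp3", "upbeat_2.mp3"],
    "sad": ["mellow_1.mp3", "mellow_2.mp3"],
    "angry": ["calm_1.mp3"],
    "neutral": ["ambient_1.mp3"],
}

def detect_emotion(image_path: str) -> str:
    """Return the dominant emotion for a captured frame via DeepFace."""
    from deepface import DeepFace  # heavy dependency, imported lazily
    # DeepFace returns one result per detected face; we take the first.
    result = DeepFace.analyze(img_path=image_path, actions=["emotion"])
    return result[0]["dominant_emotion"]

def pick_song(emotion: str) -> str:
    """Choose a track from the playlist matching the detected emotion."""
    playlist = PLAYLISTS.get(emotion, PLAYLISTS["neutral"])
    return random.choice(playlist)
```

In the actual system the chosen track would then be handed to the React front end, where Howler.js handles playback.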

Challenges We Faced

  • Facial expressions change quickly, so we added a 3-second buffer to stabilise emotion detection.
  • Letting the current song finish before starting a new one, so transitions feel seamless.
  • Optimising real-time processing so emotion detection and music playback run smoothly together.
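One way to implement the 3-second stabilisation buffer mentioned above is to keep recent per-frame predictions in a rolling window and only switch the reported emotion once one label holds a majority. This is a minimal sketch; the window size, frame rate, and majority threshold are assumptions, not the project's exact parameters.

```python
from collections import Counter, deque
from typing import Optional

class EmotionStabiliser:
    """Smooths noisy per-frame emotion predictions over a rolling window."""

    def __init__(self, window_size: int = 30):
        # e.g. ~3 seconds of predictions at 10 frames per second
        self.window = deque(maxlen=window_size)
        self.current: Optional[str] = None

    def update(self, emotion: str) -> Optional[str]:
        """Record one per-frame prediction and return the stable emotion.

        The stable emotion only changes once the window is full and a
        single label holds a strict majority; until then the previous
        stable value (initially None) is kept.
        """
        self.window.append(emotion)
        if len(self.window) == self.window.maxlen:
            label, count = Counter(self.window).most_common(1)[0]
            if count > len(self.window) // 2:
                self.current = label
        return self.current
```

Feeding each frame's prediction through `update` means a brief flicker of another expression will not interrupt the current song.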

Accomplishments

  • Successfully combined real-time facial emotion detection with a working music player.
  • Wrote a research report documenting our findings.

What We Discovered

  • Gained hands-on experience with DeepFace for facial expression recognition and Howler.js for audio playback in React.
  • Deepened our knowledge of real-time image processing, music psychology, and AI-driven emotion recognition.

Future Enhancements

  • Development of mobile apps to increase accessibility.
  • Integration of third-party APIs (like Spotify and YouTube) to provide a wider range of music options.
  • Increased user customization choices, a larger music database, and better emotion detection accuracy.
  • Connectivity with wearable technology to track mood continuously.

This initiative enhances emotional well-being by fusing AI and music to produce a customised, automated listening experience.

Built With

  • DeepFace
  • Python
  • React
  • Howler.js