Finding the music that truly represents someone's emotions

What it does

The program captures an image from the user's webcam, detects the emotion in it, and then plays music that matches that emotion.

How we built it

We wrote the project in Python. For the facial detection part we used the OpenCV library to capture pictures from the user's webcam and store them in the file directory, and then ran a TensorFlow-based program to detect the emotion in each image. We also prepared a folder of music pieces representing different emotions, and we used Pygame to build the user interface.
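The glue between those pieces can be sketched roughly as below. This is a minimal illustration, not our exact script: the `EMOTION_FOLDERS` layout and function names are assumptions, and the actual emotion detection (the TensorFlow part) is left out.

```python
import os

# Map each emotion label the detector can emit to a folder of music pieces.
# The folder names here are assumptions for illustration.
EMOTION_FOLDERS = {
    "happy": "music/happy",
    "sad": "music/sad",
    "angry": "music/angry",
    "neutral": "music/neutral",
}

def capture_frame(path="capture.jpg"):
    """Grab one frame from the default webcam with OpenCV and save it to disk."""
    import cv2  # imported lazily so the pure helpers below work without OpenCV
    cam = cv2.VideoCapture(0)
    ok, frame = cam.read()
    cam.release()
    if not ok:
        raise RuntimeError("could not read from webcam")
    cv2.imwrite(path, frame)
    return path

def song_folder_for(emotion):
    """Pick the music folder matching a detected emotion (fall back to neutral)."""
    return EMOTION_FOLDERS.get(emotion, EMOTION_FOLDERS["neutral"])

def play_folder(folder):
    """Play the first track in the folder with Pygame's music mixer."""
    import pygame
    pygame.mixer.init()
    track = sorted(os.listdir(folder))[0]
    pygame.mixer.music.load(os.path.join(folder, track))
    pygame.mixer.music.play()
```

A run would then be `play_folder(song_folder_for(detected_emotion))` after the TensorFlow step returns a label for the captured frame.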

Challenges we ran into

This was the first hackathon any of us had attended, so it was very challenging for us. Many of the TensorFlow scripts we found for detecting emotion relied on training datasets we didn't have access to. When using OpenCV we also initially ran into a problem where the captured image was too dark, which prevented the model from detecting the appropriate emotion. We faced a lot of difficulty connecting all the different parts of our code together, and unfortunately the final program does not run completely.
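For the too-dark webcam frames, one simple mitigation is a linear brightness adjustment before running detection. This is a sketch of that idea (the `gain`/`bias`/`threshold` values are illustrative assumptions), written with plain NumPy; OpenCV's `cv2.convertScaleAbs` does the same scaling.

```python
import numpy as np

def is_too_dark(image, threshold=60):
    """Flag frames whose mean pixel brightness falls below a threshold."""
    return image.mean() < threshold

def brighten(image, gain=1.3, bias=40):
    """Linearly rescale pixels (output = gain * pixel + bias), clipped to [0, 255].
    Equivalent to cv2.convertScaleAbs(image, alpha=gain, beta=bias)."""
    out = image.astype(np.float32) * gain + bias
    return np.clip(out, 0, 255).astype(np.uint8)
```

A capture loop could check `is_too_dark(frame)` and apply `brighten` before handing the frame to the emotion detector.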

Accomplishments that we're proud of

We are very proud that we were able to work together as a team under stressful conditions.

What we learned

Even though we didn't end up using any APIs, we learned a lot about them since we had initially planned to. We also learned a lot about popular Python libraries such as TensorFlow, OpenCV, and Pygame.

What's next for EMusic

We hope to fix the errors in our program and add more music pieces in the future. We also hope to connect the app to the user's Spotify account so it can create a playlist that the user can keep and look back on later.
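The Spotify idea could look something like this. It is only a hypothetical sketch using the spotipy library (which we have not integrated): the playlist-naming scheme, the `emotion_log` input, and the track URIs are all assumptions.

```python
from collections import Counter
from datetime import date

def playlist_name(emotion_log, when=None):
    """Name a playlist after the session's dominant emotion,
    e.g. 'EMusic: happy (2024-01-01)' (naming scheme is an assumption)."""
    when = when or date.today()
    dominant = Counter(emotion_log).most_common(1)[0][0]
    return f"EMusic: {dominant} ({when.isoformat()})"

def save_to_spotify(name, track_uris):
    """Create a private playlist on the user's account via spotipy (untested sketch)."""
    import spotipy
    from spotipy.oauth2 import SpotifyOAuth
    sp = spotipy.Spotify(auth_manager=SpotifyOAuth(scope="playlist-modify-private"))
    user_id = sp.current_user()["id"]
    playlist = sp.user_playlist_create(user_id, name, public=False)
    sp.playlist_add_items(playlist["id"], track_uris)
    return playlist["id"]
```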

Built With

python, opencv, tensorflow, pygame