Music reflects the mood of its composer and can affect the mood of its listeners. We wanted to bring the interactive nature of listening to music into an app.

What it does

The app takes a picture of the user and detects their facial expression, then plays music that matches the mood reflected in that expression.

How we built it

We wrote a Python script that captures an image of the user and sends it to Microsoft's Emotion API (Project Oxford) to detect the person's emotion, then plays music matched to that mood. Depending on whether the user likes the track, we repeat, stop, or change the music.
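The mood-to-music step can be sketched roughly as below. This is a minimal illustration, not our full script: it assumes the Emotion API returns per-emotion confidence scores as a dict (e.g. `{"happiness": 0.9, ...}`), and the `TRACKS` mapping and file paths are hypothetical placeholders.

```python
# Hypothetical mapping from a detected emotion to one of our tracks.
TRACKS = {
    "happiness": "tracks/upbeat.mp3",
    "sadness": "tracks/mellow.mp3",
    "anger": "tracks/intense.mp3",
    "neutral": "tracks/ambient.mp3",
}

def dominant_emotion(scores):
    """Return the emotion label with the highest confidence score."""
    return max(scores, key=scores.get)

def pick_track(scores, default="tracks/ambient.mp3"):
    """Choose the track mapped to the strongest detected emotion,
    falling back to a default for emotions we have no track for."""
    return TRACKS.get(dominant_emotion(scores), default)
```

In the real app the `scores` dict would come from the API response for the captured image, and the chosen file would be handed to a music player.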

One of us produced the music tracks in Ableton Live.

Challenges we ran into

Learning so much Python in a short time was a challenge, as was composing music in different styles and structures to fit different moods.

Accomplishments that we're proud of

The interactive feature that connects the user's mood to the music we play for them. Also, all the music the app plays was made by our team.

What we learned

How to work with APIs, and how to combine music production with coding.

What's next for Emoji Tune-in

Detecting users' musical tastes from their facial reactions to previously played tracks, so that Emoji Tune-in can play music tailored to each user's taste.
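One simple way this could work is to keep a per-track preference score and nudge it up or down after each facial reaction. This is purely a hypothetical sketch of the idea, not implemented code; the class and step size are our own invention.

```python
from collections import defaultdict

class TasteModel:
    """Toy preference learner: tracks a score per song and adjusts it
    based on whether the listener's facial reaction looked positive."""

    def __init__(self, step=0.1):
        self.step = step
        self.scores = defaultdict(float)  # track name -> preference score

    def update(self, track, liked):
        """Raise the track's score on a positive reaction, lower it otherwise."""
        self.scores[track] += self.step if liked else -self.step

    def favorite(self):
        """Return the highest-scoring track seen so far, or None."""
        return max(self.scores, key=self.scores.get) if self.scores else None
```

The `liked` flag would come from the same emotion-detection step used to pick the first track.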
