Inspiration

Since the theme was about theater and music, we thought about music providers. Spotify came to mind, and how it suggests music based on similar songs a listener likes or on situations the listener might want music for. We decided to suggest music based on the listener's mood, as read from their facial expression.

What it does

Users can upload an image or take a photo of their facial expression with their webcam. After submitting the chosen photo, Moodify analyzes the user's expression and suggests songs that match the detected mood.
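As a sketch of the suggestion step, a detected mood label could map to a search query sent off to the music APIs. The mapping, mood labels, and function name below are illustrative assumptions, not Moodify's actual code:

```javascript
// Hypothetical mood → search-query table (queries are our own picks,
// not taken from the project).
const MOOD_QUERIES = {
  joy: 'upbeat happy hits',
  sorrow: 'sad acoustic songs',
  anger: 'calming instrumental',
  surprise: 'energetic pop',
  neutral: 'chill background music',
};

// Fall back to the neutral query for any mood we don't recognize.
function suggestionQuery(mood) {
  return MOOD_QUERIES[mood] ?? MOOD_QUERIES.neutral;
}
```

The resulting query string could then be passed to the Spotify Web API's search endpoint or the YouTube Data API to fetch actual tracks.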

How we built it

We built Moodify with HTML, CSS, JavaScript, Express, and Docker, deployed it on Linode, and used the Google Vision API, YouTube Data API, and Spotify Web API.

Challenges we ran into

One challenge was testing the Google Vision API's facial recognition and the likelihood scores it assigns to different moods. Another was deploying our code to Linode.
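Part of what makes the mood likelihoods tricky is that the Vision API's face detection reports them as enum strings (`joyLikelihood`, `sorrowLikelihood`, `angerLikelihood`, `surpriseLikelihood` with values like `VERY_UNLIKELY` through `VERY_LIKELY`) rather than numbers. A minimal sketch of collapsing those enums into a single dominant mood; the scoring scheme and the `neutral` fallback are our own assumptions:

```javascript
// Map the Vision API likelihood enums to rough numeric scores
// (the numbers are our choice, not part of the API).
const LIKELIHOOD_SCORE = {
  UNKNOWN: 0,
  VERY_UNLIKELY: 0,
  UNLIKELY: 1,
  POSSIBLE: 2,
  LIKELY: 3,
  VERY_LIKELY: 4,
};

// face: one faceAnnotation object from a Vision API face-detection response.
function dominantMood(face) {
  const moods = [
    ['joy', face.joyLikelihood],
    ['sorrow', face.sorrowLikelihood],
    ['anger', face.angerLikelihood],
    ['surprise', face.surpriseLikelihood],
  ];
  // Default to 'neutral' when nothing scores above zero.
  let best = ['neutral', 0];
  for (const [mood, likelihood] of moods) {
    const score = LIKELIHOOD_SCORE[likelihood] ?? 0;
    if (score > best[1]) best = [mood, score];
  }
  return best[0];
}
```

For example, an annotation with `joyLikelihood: 'VERY_LIKELY'` and everything else `VERY_UNLIKELY` would come back as `joy`.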

Accomplishments that we're proud of

We're proud that we used three different APIs in our code and deployed our server to Linode.

What we learned

We learned that it's important to plan ahead and distribute the workload evenly among all team members.

What's next for Moodify

We would like to improve Moodify's UI and overall visual design. We would also add more song and playlist suggestions covering a wider range of user interests.

Built With

HTML, CSS, JavaScript, Docker, Express, Linode, Google Vision API, YouTube Data API, Spotify Web API
