People change what they listen to based on their mood. Although Spotify offers pre-set mood-based playlists, everyone feels differently about specific songs. We wanted to create an application that automatically plays music based on the user's personal musical preferences and their current mood.
What it does
Moodify uses scikit-learn to classify songs into four emotions: Happy, Sad, Hype, and Calm. We then populate a playlist for each of the four emotions and use Microsoft Azure's emotion recognition to play the playlist that most closely matches the user's current mood.
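The matching step can be sketched as picking the strongest detected emotion and mapping it to one of the four playlists. The score dictionary shape and the emotion-to-playlist table below are illustrative assumptions, not the exact Azure response:

```python
# Map Azure-style emotion scores to one of Moodify's four playlists.
# The emotion names and the mapping below are assumptions for illustration.
EMOTION_TO_PLAYLIST = {
    "happiness": "Happy",
    "sadness": "Sad",
    "anger": "Hype",      # assumption: high-arousal emotions map to Hype
    "surprise": "Hype",
    "neutral": "Calm",
    "fear": "Calm",
    "contempt": "Calm",
    "disgust": "Calm",
}

def pick_playlist(scores):
    """Return the playlist for the strongest detected emotion."""
    strongest = max(scores, key=scores.get)
    return EMOTION_TO_PLAYLIST.get(strongest, "Calm")
```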
How we built it
For emotion recognition: We first use the OpenCV library to capture a screenshot from the webcam. The screenshot is uploaded to AWS S3, and we retrieve a public link to the image on the cloud. We then pass that link to Azure's Emotion API, which determines the user's strongest emotion. For playlist population and machine learning: We use Spotify's audio features, which provide valence, tempo, speechiness, mode, loudness, and energy, to collect the information needed to classify a song. We then use a scikit-learn classifier to estimate the classification of other songs and, based on those classifications, populate the playlists.
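The classification step above can be sketched with scikit-learn. The training rows, labels, and choice of a k-nearest-neighbors model here are made-up illustrations, assuming songs are represented by their six Spotify audio features; the real project trained on its own labeled songs:

```python
# Train a classifier on Spotify audio features (valence, tempo, speechiness,
# mode, loudness, energy) to predict one of Moodify's four moods.
# All training data below is synthetic, for illustration only.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Toy rows: [valence, tempo, speechiness, mode, loudness (dB), energy]
X_train = np.array([
    [0.9, 120, 0.05, 1,  -5.0, 0.80],  # Happy
    [0.8, 110, 0.04, 1,  -6.0, 0.70],  # Happy
    [0.2,  70, 0.03, 0, -12.0, 0.20],  # Sad
    [0.3,  65, 0.05, 0, -11.0, 0.30],  # Sad
    [0.6, 150, 0.20, 1,  -4.0, 0.95],  # Hype
    [0.5, 160, 0.25, 1,  -3.5, 0.90],  # Hype
    [0.4,  80, 0.03, 1, -14.0, 0.25],  # Calm
    [0.5,  75, 0.04, 1, -15.0, 0.20],  # Calm
])
y_train = ["Happy", "Happy", "Sad", "Sad", "Hype", "Hype", "Calm", "Calm"]

model = KNeighborsClassifier(n_neighbors=3)
model.fit(X_train, y_train)

def classify_song(features):
    """Predict a mood label for one song's audio-feature vector."""
    return model.predict([features])[0]
```

In practice the features should be standardized first (e.g. with `StandardScaler`), since tempo and loudness are on much larger scales than the 0-to-1 features and would otherwise dominate the distance computation.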
Challenges we ran into
We ran into challenges providing enough data to train the algorithm. We considered supervised learning to increase its effectiveness, but due to time constraints we kept our original approach. Moreover, it was hard to integrate the individual features into one final output.
Accomplishments that we're proud of
Trying out new technology stacks and libraries, and finishing on time.
What we learned
We learned how to use MongoDB, AWS, and machine learning libraries.
What's next for Moodify
We want to implement supervised learning and deploy the website to a server, rather than running it on localhost as we do now.