Given the growing importance of convenience in modern applications, we thought a music player that automatically chooses songs based on the emotion conveyed by the user's facial expression would make listening to music in a car easier.
What it does
Our music player uses AI to analyze the user's facial expression, determines their emotion, and chooses songs to play accordingly.
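At a high level, the selection step can be sketched as a mapping from the detected emotion to a genre to queue. The mapping below is purely illustrative; picking the right genres per emotion was one of our open design questions (see "Challenges we ran into"):

```python
import random

# Hypothetical emotion-to-genre mapping, not the tuning we shipped.
EMOTION_GENRES = {
    "joy": ["pop", "dance"],
    "sorrow": ["acoustic", "lo-fi"],
    "anger": ["rock", "metal"],
    "surprise": ["electronic", "indie"],
}

def pick_genre(emotion, rng=None):
    """Choose a genre to queue for the detected emotion."""
    rng = rng or random.Random()
    # Fall back to a neutral default when the emotion is unrecognized.
    genres = EMOTION_GENRES.get(emotion, ["pop"])
    return rng.choice(genres)
```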
How we built it
We built on the API provided by Octave Group and implemented facial emotion recognition using the Google Vision API.
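Google Vision's face detection returns per-emotion likelihoods on each face annotation (`joy_likelihood`, `sorrow_likelihood`, `anger_likelihood`, `surprise_likelihood`) as enum values from `VERY_UNLIKELY` (1) to `VERY_LIKELY` (5). A minimal sketch of collapsing those likelihoods into a single emotion label, with the values passed in as plain ints so the logic runs without API credentials:

```python
# Vision Likelihood enum: VERY_UNLIKELY=1 ... LIKELY=4, VERY_LIKELY=5.
LIKELY = 4

def dominant_emotion(likelihoods):
    """Return the strongest emotion, or 'neutral' if none is at least LIKELY.

    likelihoods: dict like {"joy": 5, "sorrow": 1, "anger": 2, "surprise": 1},
    where values mirror the enum fields on a Vision face annotation.
    """
    emotion, score = max(likelihoods.items(), key=lambda kv: kv[1])
    return emotion if score >= LIKELY else "neutral"
```

In the real pipeline these ints would come from a `face_detection` response rather than a hand-built dict.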
Challenges we ran into
We had difficulty deciding which genre of songs to play for each emotion. We also struggled with the front end and ended up scrapping our original idea of a Material Design UI.
Accomplishments that we are proud of
We trained our AI ourselves on a set of training data that we collected.
What we learned
We learned how to work with different APIs and how to train an AI model.
What's next for aMuseMe
We plan to implement continuous facial tracking so the playlist updates in real time. We also intend to add a liking feature: users will be able to like songs, and liked songs will be played more often when they match the user's mood.