Every time one of us sat down to listen to music, whether to relax or to exercise, we couldn't find the right music. So we decided to build an app that recommends music you'd like to hear, based on your facial expression and mood.

What it does

The app reads your facial expression and estimates the probability that you are smiling, then classifies your mood into one of three states: happy, sad, or somewhere in the middle. Based on that mood, it recommends a song.
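The mood bucketing can be sketched as a simple threshold on the smiling probability that ML Kit reports. The 0.7 / 0.3 cutoffs and the playlist-based recommendation helper below are illustrative assumptions, not the exact values or logic used in the app:

```kotlin
// Map a smiling probability (0.0–1.0) to one of three moods.
// The 0.7 / 0.3 thresholds are assumed for illustration.
fun moodFor(smilingProbability: Float): String = when {
    smilingProbability >= 0.7f -> "happy"
    smilingProbability >= 0.3f -> "neutral" // "in the middle"
    else -> "sad"
}

// A song could then be picked at random from a per-mood playlist
// (hypothetical helper; the app's real selection logic may differ).
fun recommend(mood: String, playlists: Map<String, List<String>>): String? =
    playlists[mood]?.randomOrNull()
```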

How we built it

We used Firebase's ML Kit (beta) for face detection, and Android Studio to build the app.
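For reference, getting a smiling probability out of the (beta-era) Firebase ML Kit vision API looks roughly like the sketch below. It only runs inside an Android app with the `firebase-ml-vision` dependency configured, and `bitmap` stands in for a camera frame supplied elsewhere:

```kotlin
import com.google.firebase.ml.vision.FirebaseVision
import com.google.firebase.ml.vision.common.FirebaseVisionImage
import com.google.firebase.ml.vision.face.FirebaseVisionFaceDetectorOptions

// Enable classification so the detector reports smilingProbability.
val options = FirebaseVisionFaceDetectorOptions.Builder()
    .setClassificationMode(FirebaseVisionFaceDetectorOptions.ALL_CLASSIFICATIONS)
    .build()

val detector = FirebaseVision.getInstance().getVisionFaceDetector(options)
val image = FirebaseVisionImage.fromBitmap(bitmap) // bitmap: a camera frame

detector.detectInImage(image)
    .addOnSuccessListener { faces ->
        for (face in faces) {
            // Returns -1 when the probability could not be computed.
            val smileProb = face.smilingProbability
            // ...classify mood and recommend a song from smileProb...
        }
    }
```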

Challenges we ran into

We encountered many errors that prevented our app from running. Through trial and error, we worked through them and ended up with an app that can detect a face and report the probability that the person is smiling.

Accomplishments that we're proud of

We're proud that, at our first hackathon as a team, we managed to create an app that can recognize facial expressions and roughly determine someone's mood.

What we learned

We learned much of the ML Kit API, and this was also our first time really using Firebase. The next time we use Firebase ML Kit, we'll be able to finish a project like this in half the time.

What's next for Exprezzo

We plan to play music out loud, personalize recommendations, recognize faces rather than just detect them, and support a larger variety of genres.
