How many times have you been listening to your playlist and the music just didn’t feel right? Countless times for us. It’s no secret that people pick what to listen to based on, you know, the mood, the atmosphere, the “ambience”. We wanted to create an application that curates music to your mood and emotion, and we wanted to build it using AI and ML!
What it does
Our project is a music player and recommender driven by the user’s facial emotion, using facial recognition and AI technology. A user simply takes a photo with their webcam in our app, the facial recognition model detects their emotion, and the app serves up a matching playlist. Users also have the option to “change ambience”, which swaps in a different playlist for their current emotion. Finally, a user can log their emotion in a journal, adding a description of their experience along with a timestamp.
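The journal feature described above can be sketched as a small helper that packages an entry (emotion, description, timestamp) before it is saved. This is a minimal illustration only; the field names and helper are hypothetical, not taken from the actual codebase:

```javascript
// Hypothetical shape of a journal entry: the emotion detected from the
// user's photo, their written description, and when it was logged.
// Accepting `now` as a parameter keeps the function easy to test.
function makeJournalEntry(emotion, description, now = new Date()) {
  return {
    emotion,                      // e.g. "happy", "sad", "neutral"
    description,                  // free-text note from the user
    timestamp: now.toISOString(), // ISO 8601 string, e.g. "2024-01-15T10:00:00.000Z"
  };
}
```

An object like this could then be written to a Firebase collection for the journal page to read back in timestamp order.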
How we built it
We built the app in React, using face-api.js for facial emotion detection and Firebase to store users’ journal entries.
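face-api.js reports a probability score for each expression it recognizes (happy, sad, angry, neutral, and so on). A minimal sketch of how those scores might be mapped to a playlist is below; the playlist names and the `pickPlaylist` helper are hypothetical, not from our codebase:

```javascript
// Hypothetical mapping from detected emotion to a playlist name.
const PLAYLISTS = {
  happy: "Upbeat Vibes",
  sad: "Rainy Day",
  angry: "Let It Out",
  neutral: "Easy Listening",
};

// `expressions` is an object of per-expression probabilities, in the
// shape face-api.js produces, e.g. { happy: 0.9, sad: 0.02, ... }.
// Returns the playlist for the highest-scoring expression we have a
// playlist for, defaulting to the neutral one.
function pickPlaylist(expressions) {
  let best = "neutral";
  let bestScore = -Infinity;
  for (const [emotion, score] of Object.entries(expressions)) {
    if (emotion in PLAYLISTS && score > bestScore) {
      best = emotion;
      bestScore = score;
    }
  }
  return PLAYLISTS[best];
}
```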
Challenges we ran into
Integrating Firebase and styling with CSS were the main challenges we ran into while building our project.
Accomplishments that we're proud of
Since this is only our third hackathon, we’re proud that we were able to create a fully functional AI product using a language and framework different from what we had used before. We also think our project has a lot of potential for additional features!
What we learned
We all learned different things, since some of us worked with face-api.js, some built the music player, some handled Firebase and the journals, and so on. We all improved our React skills and resolved a bunch of merge conflicts along the way.
What's next for ambience
We have a whole bunch of ideas: user authentication, uploading custom playlists, integration with Spotify or another music service, and more animations for a more personalized experience. We also think this app would be pretty well-received on mobile, so we'll work on that too.