One of the LA Hacks prizes was for the most creative use of Mashape and one of the APIs it hosts, so I thought, "Why not use one of their popular facial recognition engines to create personalized musical playlists?" And thus, MusicFace was born. MusicFace uses several APIs to generate a Spotify playlist of songs it thinks you might like. It's a quick and easy way to get your jam on and maybe even discover new music in the process.
How it works ♬
MusicFace uses your browser's webcam to take a picture of your face. It then submits a request to Face++ to parse the picture for faces and facial heuristics, extracting information such as age, gender, ethnicity, and whether or not you're smiling. Our algorithm then assigns a mood rating based on that information and generates a relevant Spotify playlist.
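The mood-assignment step can be sketched roughly as follows. This is a simplified illustration, not the project's actual algorithm: the attribute field names loosely follow the shape of a Face++ Detect response, but the thresholds, mood labels, and the `MOOD_QUERIES` mapping are all hypothetical.

```python
# Hypothetical sketch of mapping Face++-style facial attributes to a mood,
# then to a Spotify search query used to seed a playlist.

def mood_from_attributes(attrs):
    """Assign a rough mood label from a facial-attributes dict.

    `attrs` mimics the nested shape of a Face++ Detect response,
    e.g. {"smile": {"value": 80}, "age": {"value": 22}}.
    """
    smiling = attrs.get("smile", {}).get("value", 0)  # smile intensity, 0-100
    age = attrs.get("age", {}).get("value", 30)
    if smiling > 50:
        return "upbeat"
    elif age < 25:
        return "energetic"
    return "mellow"

# Hypothetical mood -> Spotify search terms mapping
MOOD_QUERIES = {
    "upbeat": "happy pop",
    "energetic": "dance workout",
    "mellow": "chill acoustic",
}

def playlist_query(attrs):
    """Turn facial attributes into a playlist search query."""
    return MOOD_QUERIES[mood_from_attributes(attrs)]
```

In the real app, the query string would then be fed to Spotify's search to build the playlist.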
Challenges we ran into ♬
Even all of the Red Bull in the hackathon couldn't keep Philip, our main programmer, awake after 24 hours of straight coding, debugging, and refactoring. Major props to him.
Accomplishments that we're proud of ♬
The project manages to create harmony out of several very different APIs. Getting them to work together was probably the most challenging part of the project.
What we've learned ♬
CSS is a pain in the behind to work with. APIs look great on their own, but getting them to cooperate is another story.
What's next for MusicFace ♬
Stronger mood-matching, with more facial heuristics used to determine playlists. Implement puppies.
APIs used ♬