Inspiration

Early in the hackathon, we stopped by the Google table and saw a neat running implementation of facial detection (placing boxes around faces in a video feed). Later, we explored the Cloud Vision API and found that it offered a rudimentary emotion detector. We knew we wanted to use Spotify for something, and thought about the kinds of algorithms Spotify uses to select and recommend music (also taking inspiration from Nike+ and its matching of running pace to bpm). We decided that our sphere would be "mood" music -- matching recommended songs to someone based on their mood (as determined by their expression). From there, the tools became obvious, and we got to work.

What it does

The app uses a webcam to photograph the person at the computer every 10-15 seconds while they listen to music. Each photograph is quickly processed with Google Cloud Vision to determine the most likely emotion, then deleted. The application combines this data with the user's current listening activity on Spotify, and creates or expands "mood playlists" based on the mood associated with each track.
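As a rough sketch of that capture-and-classify loop (not our exact code), the snippet below grabs a frame, sends it to Cloud Vision's face detection, and picks whichever of the reported joy/sorrow/anger/surprise likelihoods is strongest. OpenCV is assumed here purely for webcam capture, and the 12-second pause is just one point in the 10-15 second range.

```python
import time

import cv2  # assumption: OpenCV for webcam capture; any frame grabber would do
from google.cloud import vision

# Expects GOOGLE_APPLICATION_CREDENTIALS to point at a service-account key.
client = vision.ImageAnnotatorClient()
EMOTIONS = ("joy", "sorrow", "anger", "surprise")


def detect_mood(jpeg_bytes):
    """Return the most likely emotion for the first detected face, or None."""
    response = client.face_detection(image=vision.Image(content=jpeg_bytes))
    if not response.face_annotations:
        return None
    face = response.face_annotations[0]
    # Each *_likelihood is an enum ranging from VERY_UNLIKELY (1) to VERY_LIKELY (5).
    scores = {name: getattr(face, name + "_likelihood") for name in EMOTIONS}
    return max(scores, key=scores.get)


camera = cv2.VideoCapture(0)
while True:
    grabbed, frame = camera.read()
    if grabbed:
        encoded, jpeg = cv2.imencode(".jpg", frame)
        if encoded:
            # The frame only ever lives in memory, so nothing needs deleting afterwards.
            print("detected mood:", detect_mood(jpeg.tobytes()))
    time.sleep(12)  # one photograph every 10-15 seconds
```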

How we built it

We first worked on an Android implementation, incorporating the Android Spotify API, the Android/Java Cloud Vision API, and the Android camera. This proved problematic, as we ran into many hiccups authenticating the APIs and harnessing the Android camera. Partway through the hackathon, we changed course, ditching our Android code and pursuing a Python 3 implementation instead.
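To illustrate the Spotify half of the Python version, here is a minimal sketch assuming the spotipy client library and a hypothetical "mood: <emotion>" playlist naming scheme; the environment variables and OAuth scopes shown are spotipy's standard setup, not anything specific to this project.

```python
import spotipy
from spotipy.oauth2 import SpotifyOAuth

# spotipy reads SPOTIPY_CLIENT_ID, SPOTIPY_CLIENT_SECRET and SPOTIPY_REDIRECT_URI
# from the environment; the scopes cover reading playback and editing private playlists.
sp = spotipy.Spotify(auth_manager=SpotifyOAuth(
    scope="user-read-currently-playing playlist-read-private playlist-modify-private"))


def add_current_track_to_mood_playlist(mood):
    """File whatever is playing right now into a playlist named after the detected mood."""
    playing = sp.current_user_playing_track()
    if not playing or not playing.get("item"):
        return  # nothing is playing

    playlist_name = "mood: " + mood  # hypothetical naming scheme
    playlists = sp.current_user_playlists()["items"]
    playlist = next((p for p in playlists if p["name"] == playlist_name), None)
    if playlist is None:
        playlist = sp.user_playlist_create(sp.me()["id"], playlist_name, public=False)

    sp.playlist_add_items(playlist["id"], [playing["item"]["uri"]])
```

Each detection cycle would then end with a call like add_current_track_to_mood_playlist(mood).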

Challenges we ran into

We spent many hours knocking our heads against the proverbial wall while attempting to sort out safe authentication for Cloud Vision on Android (including figuring out which authentication methods were deprecated), authentication for the Spotify API, and Android camera access. Ultimately, we realized that we had made essentially no tangible progress, and decided to focus on producing a product that we were confident would be functional and show off our idea.

Accomplishments that we're proud of

We're proud of rapidly finding an idea, built on powerful APIs, that came together well. We're also proud of the possible extensions--this is a platform that can be expanded in a number of ways, including face recognition (to figure out who is listening, and possibly choose songs for a particular combination of friends), as well as machine learning techniques to better analyze the data and predict song matches.

What we learned

We learned that in life, as in hacking, there are times when problems can seem absolutely insurmountable, and when effort seems to drain away without results. However, we also learned that even in these times, there is often a path (maybe not the shortest, the most glamorous, or the most robust) that can lead to progress and learning.

What's next for face-the-music

First, we want to focus on creating a more user-friendly interface, so that the app can actually be distributed in a useful way. We'd also like to give Android another shot, after doing the background work to understand where we went wrong. Finally, we want to expand our horizons with the wide variety of tools available (such as Google's ML tools, or data sources like location services) that could extend our model even further.

Built With

Python, Google Cloud Vision, Spotify API