Inspiration

Standing on top of the Empire State Building or strolling along Whitehaven Beach, we always want to find great music to accompany these marvelous sceneries. This is why we developed PAIR. Whether you are on a spectacular journey or stuck in your tedious daily routine, PAIR makes your life more enjoyable by playing THE song that best suits your surroundings.

Personal Acoustic-Inspiration Relator (PAIR) (just kidding)

What it does

In a nutshell, PAIR pairs you with music that matches your surroundings.

In more detail, PAIR lets the user input an image, either imported from the camera roll or newly taken, and returns a song whose mood best matches the mood of the input image. If the user has Spotify installed, PAIR automatically opens Spotify and loads the song; if not, it opens the web version of Spotify in Safari.
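A minimal sketch of that handoff in Swift (the function name and track ID are illustrative placeholders, not taken from our actual source):

```swift
import UIKit

/// Opens a track in the Spotify app if it is installed,
/// otherwise falls back to the Spotify web player.
func openInSpotify(trackID: String) {
    let appURL = URL(string: "spotify:track:\(trackID)")!
    let webURL = URL(string: "https://open.spotify.com/track/\(trackID)")!

    if UIApplication.shared.canOpenURL(appURL) {
        UIApplication.shared.open(appURL)   // Spotify app is installed
    } else {
        UIApplication.shared.open(webURL)   // fall back to the web player in Safari
    }
}
```

Note that for canOpenURL to detect the Spotify app, the spotify URL scheme has to be declared under LSApplicationQueriesSchemes in Info.plist.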

How it works

Our app uses a backend that relies on the Google Vision API. The user's image is uploaded to be labelled, and the labels are then embedded into a word map that classifies emotions. The map outputs a vector corresponding to our different emotion types, and we select the song from the data set whose emotion vector is the most similar. We then use the Spotify API to load the song in the Spotify app.
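As an illustration, the matching step might look roughly like the Swift sketch below; the data structures, the four-dimensional emotion vector, and the use of cosine similarity are assumptions for the example rather than our exact implementation:

```swift
import Foundation

// Each Vision label is mapped to an emotion vector by a word map, the
// image's vectors are averaged, and the song with the highest cosine
// similarity to that average is picked.

struct Song {
    let title: String
    let spotifyID: String
    let emotion: [Double]   // e.g. [joy, calm, sadness, energy]
}

func cosineSimilarity(_ a: [Double], _ b: [Double]) -> Double {
    let dot = zip(a, b).map(*).reduce(0, +)
    let magA = sqrt(a.map { $0 * $0 }.reduce(0, +))
    let magB = sqrt(b.map { $0 * $0 }.reduce(0, +))
    return (magA == 0 || magB == 0) ? 0 : dot / (magA * magB)
}

func bestMatch(labels: [String],
               wordMap: [String: [Double]],
               songs: [Song]) -> Song? {
    // Average the emotion vectors of all labels the word map knows about.
    let vectors = labels.compactMap { wordMap[$0.lowercased()] }
    guard let first = vectors.first else { return nil }
    var imageEmotion = [Double](repeating: 0, count: first.count)
    for v in vectors {
        for i in 0..<v.count { imageEmotion[i] += v[i] }
    }
    imageEmotion = imageEmotion.map { $0 / Double(vectors.count) }

    // Pick the song whose emotion vector is closest to the image's.
    return songs.max { cosineSimilarity($0.emotion, imageEmotion)
                     < cosineSimilarity($1.emotion, imageEmotion) }
}
```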

Challenges I ran into

The biggest challenge was interconnecting the entire process, from processing the user's input picture to returning the best match. It was also challenging to sync progress among all team members, especially as we delved into Swift development using Xcode: a single modification to the UI could cause huge discrepancies in the view controller source code.

Accomplishments that I'm proud of

One of our greatest accomplishments is that we successfully adopted ML mechanisms in song matching. We also enabled PAIR to classify plain nouns as emotion words that can be used in pairing songs. Most importantly, we all learned a great deal about software development and machine learning through this rewarding experience.

What I learned

Technically, we learned about AI-related APIs and server-side programming. Working in Swift, a language unfamiliar to us, we learned how to use Google Cloud Vision label detection and how to post HTTP requests and read responses. We were already familiar with front-end work, but we consolidated our front-end skills by picking up small new features, such as requesting camera access on an iPhone.
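A hedged sketch of the kind of request we learned to send, posting a base64-encoded image to the Cloud Vision images:annotate endpoint for label detection (the API key placeholder and function name are illustrative, and error handling is trimmed for brevity):

```swift
import UIKit

/// Sends a label-detection request for the given image and hands back the
/// raw JSON response data (which contains labelAnnotations on success).
func requestLabels(for image: UIImage,
                   completion: @escaping (Data?) -> Void) {
    guard let jpeg = image.jpegData(compressionQuality: 0.8) else {
        completion(nil)
        return
    }
    let body: [String: Any] = [
        "requests": [[
            "image": ["content": jpeg.base64EncodedString()],
            "features": [["type": "LABEL_DETECTION", "maxResults": 10]]
        ]]
    ]

    let url = URL(string: "https://vision.googleapis.com/v1/images:annotate?key=YOUR_API_KEY")!
    var request = URLRequest(url: url)
    request.httpMethod = "POST"
    request.setValue("application/json", forHTTPHeaderField: "Content-Type")
    request.httpBody = try? JSONSerialization.data(withJSONObject: body)

    URLSession.shared.dataTask(with: request) { data, _, _ in
        completion(data)
    }.resume()
}
```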

As college freshmen and first-time participants, we also learned how to build a large program as a team. GitHub conflicts and merges, which were kind of a headache, gave us a taste of what app development is like as a group. The process wasn't smooth, and at times we had messy branches, but we figured it out and became better collaborators.

What's next for Pair

First, we can expand our database to include more songs. Currently, we are using Spotify. In the future, we could support more apps (music databases) and even let users choose which apps (databases) they want to use.

Second, we can improve our emotion-deciding algorithm. Our AI pipeline that labels images is very accurate, but the step that decides which labels are linked to which emotions involves subjectivity. Once we gather enough user feedback, we can make the emotion-deciding algorithm more objective for most users.


Updates

posted an update

I mainly worked on the backend. I was responsible for writing the music recommendation algorithm, including choosing the appropriate music according to the labels from the image and fetching data from Spotify's API. It was a pleasure working with my friends.


posted an update

I worked on the front end, mainly developing the UI in Xcode and doing graphic design. It was a great pleasure to work with my dedicated teammates, all of whom contributed a lot to the final project.
