Our surroundings are often big factors in our mood, and we wanted to create an app that would take our environment and enhance it through music.

What it does

No more searching to find a song that fits your mood when you're out and about. Our app can take your picture and transform it into an auditory experience, generating a song or playlist that suits your environment.

How we built it

We used Microsoft Azure's machine learning API to extract descriptive words from the picture, then passed those words into Microsoft's cognitive vision service to find an associated emotion. Once we had an emotion that captured the mood of the picture, we used Spotify to play a song or playlist that brings that mood to life!
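In outline, the middle step maps the words extracted from the picture to an emotion. A minimal sketch of that step (the keyword table and function name below are hypothetical illustrations, not the model we actually used):

```python
# Hypothetical tag -> emotion step; the keyword table is illustrative only.
EMOTION_KEYWORDS = {
    "happy": {"beach", "sun", "party", "smile", "festival"},
    "calm": {"forest", "lake", "fog", "library", "snow"},
    "energetic": {"city", "crowd", "neon", "stadium", "traffic"},
}

def emotion_from_tags(tags):
    """Score each emotion by how many of its keywords appear in the tags."""
    scores = {
        emotion: len(keywords & set(tags))
        for emotion, keywords in EMOTION_KEYWORDS.items()
    }
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "calm"  # neutral fallback

print(emotion_from_tags(["city", "neon", "crowd"]))  # -> energetic
```

A real system would score tags with a learned model rather than a fixed table, but the interface is the same: tags in, one emotion out.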

Challenges we ran into

Initially, we wanted to train a machine learning model to extract an emotion from the words associated with the picture. However, we had trouble finding a suitable dataset, so instead we generated topics similar to the picture's tags in order to map them to a genre or mood of music. Once we got this working, our Python backend could generate a Spotify link when given an image URL.
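The last step, turning a mood into a Spotify link, could be sketched like this against Spotify's Web API search endpoint (the mood-to-genre table and function name are hypothetical, and a real request would also need an OAuth bearer token):

```python
from urllib.parse import urlencode

# Hypothetical mood -> genre mapping; illustrative only.
MOOD_TO_GENRE = {
    "happy": "pop",
    "calm": "ambient",
    "energetic": "dance",
}

def spotify_search_url(mood):
    """Build a Spotify Web API search URL for a playlist matching the mood.

    A real request must also send an "Authorization: Bearer <token>" header.
    """
    genre = MOOD_TO_GENRE.get(mood, "pop")
    query = urlencode({"q": f"{mood} {genre}", "type": "playlist", "limit": 1})
    return f"https://api.spotify.com/v1/search?{query}"

print(spotify_search_url("calm"))
# -> https://api.spotify.com/v1/search?q=calm+ambient&type=playlist&limit=1
```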

Accomplishments that we're proud of

We were able to use LDA (latent Dirichlet allocation) for word association, generating words to represent each genre of music. This model helped us find a song that properly represents the picture.

What's next for Surround Sound

Because of the time crunch at CalHacks, we were unable to connect all of Surround Sound's components into a seamless application around our backend algorithm. We plan to let our app take and upload pictures so that receiving music is easy and convenient when our users are out and about, and to support software that analyzes images to create a musically enhanced video from them, which we have already implemented but not yet added to the app.
