Screenshots:
- Loading screen shown after capturing an image
- List of generated songs (two screens)
- Selected song, showing the album cover and a link to the song's Spotify URL
- Recommended songs (two screens)
imusi is an innovative full-stack application that analyzes characteristics of a picture in order to find a relevant song on Spotify that fits the mood of the user's environment.
The quintessential image of a college student is a person with their headphones plugged in, walking across campus with their eyes glued to their phone. In a place where technology has become so ubiquitous, it is easy to fall into our own little online world. Our group recognized the growing attachment to both technology and music in our lives, so we set out to create a product that applies image recognition technology in a fun, musical direction. By creating imusi, we hope that users learn to embrace the moment: not solely by capturing the visual representation of their lives, but also by feeling the essence of their environment through the app's selection of music. In this regard, we encourage people to use technology to interact more with their surroundings.
What it does
imusi essentially functions as a self-serve Snapchat: the user takes a picture, and the program analyzes the image's features to quantify the musical attributes that best represent it. For example, a more vibrant image yields a higher sentiment value in the music and contributes to the overall decision about which genre the image fits. To extract image features, imusi uses the Clarifai API and the OpenCV library. Clarifai determines the main concepts of the image, which factor into determining its atmosphere, while OpenCV complements Clarifai's results by calculating raw features of the image, such as color temperature. Each picture the user takes generates up to twenty songs from across different Spotify genres that fit the atmosphere and mood of the picture's contents.
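The raw features OpenCV contributes can be illustrated with a minimal sketch. The function name and the specific features below (brightness, a red-minus-blue warmth proxy standing in for color temperature, and saturation) are illustrative assumptions, not imusi's actual implementation; an OpenCV image is just a NumPy array, so plain NumPy suffices here:

```python
import numpy as np

def raw_image_features(img_rgb: np.ndarray) -> dict:
    """Compute simple raw features from an RGB image (H x W x 3, uint8).

    These stand in for the kind of low-level measurements OpenCV exposes;
    the exact features imusi uses are not specified beyond color temperature.
    """
    img = img_rgb.astype(float) / 255.0
    brightness = img.mean()
    # Crude warmth proxy: how much red dominates blue, averaged over pixels.
    warmth = (img[..., 0] - img[..., 2]).mean()
    # Per-pixel spread across channels approximates saturation.
    saturation = (img.max(axis=-1) - img.min(axis=-1)).mean()
    return {"brightness": brightness, "warmth": warmth, "saturation": saturation}
```

A pure-red test image, for instance, scores maximally warm and saturated but only one-third bright, since only one of three channels is lit.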
Challenges we ran into
One of the biggest challenges early on was deciding how to quantitatively measure and map music so that we could compare the physical contents and characteristics of an image to a song. We addressed this by breaking songs down into components based on the Spotify API (e.g. speechiness and energy) and assigning physical traits of pictures (e.g. brightness or semantic contents) to each component. The algorithm then compares the vector of picture traits to each song's vector of traits and returns the songs most similar to the picture.

Another challenge was front-end and back-end communication. Our front-end, a mobile app written in React Native and a web UI written in Python, communicated with our backend through HTTP requests. The most significant data being transferred was the raw bytes of the image, which proved challenging because we had to encode and decode the data into compatible formats. Furthermore, we were limited by the constraints of building our app through Expo, as opposed to ejecting to a standalone native build. This not only reduced the number of available libraries we could use, but also indirectly increased the time it took to integrate our features.
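The vector comparison described above can be sketched with cosine similarity. This is an illustrative assumption: the write-up says the algorithm returns songs whose trait vectors are most similar to the picture's, but does not name the similarity measure, and the trait names below are placeholders:

```python
import numpy as np

def rank_songs(picture_traits, song_traits):
    """Rank song IDs by cosine similarity between the picture's trait
    vector and each song's trait vector (e.g. energy, valence, speechiness).

    picture_traits: sequence of floats
    song_traits: dict mapping song ID -> sequence of floats (same order)
    """
    p = np.asarray(picture_traits, dtype=float)
    scores = {}
    for song, traits in song_traits.items():
        s = np.asarray(traits, dtype=float)
        # Small epsilon guards against division by zero for all-zero vectors.
        scores[song] = float(p @ s / (np.linalg.norm(p) * np.linalg.norm(s) + 1e-9))
    return sorted(scores, key=scores.get, reverse=True)
```

With a bright, high-energy picture vector, songs whose Spotify features point in the same direction rank first, regardless of the vectors' magnitudes.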
Accomplishments that we're proud of
We are proud that we turned our idea into an actual product that people can use and enjoy. Because of its flexibility, we believe there are ways to apply our application to other tasks. YouTubers can use the app to generate relevant music for their videos instantly, without spending time manually combing through songs. If someone has no playlists or needs new music, imusi can generate quality music in seconds. With imusi, the only limit is your imagination. We strongly believe that imusi will impact the way people observe their surroundings, and that its incorporation of music will change the way we reflect on our experiences.
What we learned
We learned how to run a backend server using Flask, how to deploy a web server using Heroku, how to communicate between the front-end and the back-end of the application, how to create a well-designed UI, and much more.
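One concrete piece of that front-end/back-end communication, encoding raw image bytes so they survive a JSON HTTP request, can be sketched with the standard library. The function names and payload key are hypothetical; this only shows the encode/decode round trip that the challenges section describes:

```python
import base64
import json

def encode_image_for_request(raw_bytes: bytes) -> str:
    """Wrap raw image bytes in a JSON body the backend can parse.

    Base64 turns arbitrary bytes into ASCII text that JSON can carry.
    """
    return json.dumps({"image": base64.b64encode(raw_bytes).decode("ascii")})

def decode_image_from_request(payload: str) -> bytes:
    """Recover the original image bytes on the server side."""
    return base64.b64decode(json.loads(payload)["image"])
```

On the Flask side, the decoder would run against the request body before handing the bytes to Clarifai and OpenCV.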