Inspiration

Synesthesia, a neurological phenomenon found in a small minority of the world’s population, allows people to blend different senses and emotions with one another. Beautiful, eloquent, and completely mind-boggling, synesthesia has shaped the work of many brilliant composers, artists, and other creative minds, such as Wolfgang Amadeus Mozart, Hans Zimmer, and even Duke Ellington! These people have a unique ability that we hope to understand: the ability to see the color of music.

What it does

Using Jibo, a personal robot, we can take a photo of nearly anything; based on the image’s coloration, Jibo internalizes the data and calls up a song keyed to the image’s dominant color. If an image is mostly red, Jibo plays a rock or heavy-metal song, depending on the depth of the coloration, to fit the “mood.” In the same way people with synesthesia do, Jibo maps images to music and determines which track best fits the situation. Jibo can also look at a person’s face and, depending on their mood, play a song befitting their expression. Jibo listens for voice commands and can take a photo, either of a person’s face or of a scene, on command.
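
To make the color-to-music idea concrete, here is a minimal sketch of the kind of mapping involved. The genre table, the `depth` heuristic, and the threshold are illustrative stand-ins we made up for this writeup, not our exact rules:

```typescript
type Genre = "rock" | "heavy metal" | "jazz" | "classical" | "pop";

// Map a dominant color name plus a rough color "depth" (0..1, where darker
// and more saturated = higher) to a genre. Table and threshold are made up.
function genreForColor(dominantColor: string, depth: number): Genre {
  switch (dominantColor.toLowerCase()) {
    case "red":
      // Deeper reds push toward heavier music, as described above.
      return depth > 0.6 ? "heavy metal" : "rock";
    case "blue":
      return "jazz";
    case "white":
    case "grey":
      return "classical";
    default:
      return "pop";
  }
}
```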

How we built it

Utilizing Jibo’s SDK, we incorporated several different APIs, including Spotify, Microsoft Computer Vision, RapidAPI, Last.fm, and Imgur, into a cohesive project that captures an image and interprets it. Using Atom as our text editor/IDE, we created all the behavior trees and functions the program needed. No hardware work was required, since Jibo ships with many built-in features; incorporating new functionality on top of them was the biggest task we accomplished.
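
A minimal sketch of the Computer Vision request at the heart of the pipeline, assuming the public Analyze Image REST endpoint; the `ENDPOINT` and `KEY` values are placeholders, and the exact API version is an assumption (our real integration ran inside Jibo’s SDK):

```typescript
const ENDPOINT = "https://<your-region>.api.cognitive.microsoft.com"; // placeholder
const KEY = "<subscription-key>"; // placeholder

// Ask Computer Vision for color analysis of an image reachable by URL.
// The visualFeatures=Color query and the color.dominantColors response
// field follow the public Analyze Image REST API.
async function dominantColors(imageUrl: string): Promise<string[]> {
  const res = await fetch(
    `${ENDPOINT}/vision/v3.2/analyze?visualFeatures=Color`,
    {
      method: "POST",
      headers: {
        "Ocp-Apim-Subscription-Key": KEY,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({ url: imageUrl }),
    }
  );
  const analysis = await res.json();
  return analysis.color.dominantColors; // e.g. ["Red", "Black"]
}
```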

Challenges we ran into

One of the biggest challenges was locally accessing images on Jibo and transferring them to the Microsoft Computer Vision API in a compatible form. Another major source of difficulty was writing the words for Jibo (i.e., the rules): the phrases for him to speak and the grammar for him to understand us speaking.
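
One common way around the local-image problem is to POST the photo’s raw bytes instead of a public URL. This is a hedged sketch of that approach under the same assumed endpoint as above, not a claim about our exact shipped code; all names and paths are placeholders:

```typescript
import { readFile } from "node:fs/promises";

const ENDPOINT = "https://<your-region>.api.cognitive.microsoft.com"; // placeholder
const KEY = "<subscription-key>"; // placeholder

// Send a locally stored photo as raw bytes (application/octet-stream)
// rather than a URL, since photos captured on Jibo never leave the device.
async function analyzeLocalImage(path: string): Promise<unknown> {
  const bytes = await readFile(path);
  const res = await fetch(
    `${ENDPOINT}/vision/v3.2/analyze?visualFeatures=Color`,
    {
      method: "POST",
      headers: {
        "Ocp-Apim-Subscription-Key": KEY,
        "Content-Type": "application/octet-stream",
      },
      body: bytes,
    }
  );
  return res.json();
}
```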

Accomplishments that we're proud of

  • Successful integration with Microsoft Computer Vision
  • Ability to interpret a user's emotions through both their facial expression and their art
  • Our cohesive teamwork, which let us merge our individual passions toward one goal

What we learned

We all had to pick up the Jibo SDK on the fly and learn to maneuver past the bugs and various other difficulties that arose because the SDK was still in beta. We also learned firsthand what it means to scrap an idea for something new, and that while hardships come, success lies in finding something everyone on the team is passionate about and using that passion to fuel innovation and perseverance.

What's next for Color of Music

Once Jibo is implemented into households, we hope it can be used when people's moods falter, and perhaps to cheer them up. By recognizing facial patterns and examining a scene, Jibo will be able to choose the right moment to play the right song for the consumer. Through Jibo, we hope people will be able to see the color of music. Something we wanted but did not have time to implement was taking in the full spectrum of colors and inferring the emotion from color densities or RGB values rather than just the dominant color (sketched below). A next step for our app could be implementing the color-to-mood-to-song mapping as a deep neural network that can be trained to predict a song suited to the user more accurately.
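
As a rough sketch of that unimplemented idea (a hypothetical helper, not shipped code): summarize the image’s whole color spectrum as channel densities, which could then serve as input features for a trained mood model instead of branching on a single dominant color:

```typescript
// Reduce an RGBA pixel buffer (e.g. from a canvas ImageData) to relative
// red/green/blue densities that could feed a future mood-prediction model.
function colorDensities(pixels: Uint8ClampedArray): [number, number, number] {
  let r = 0, g = 0, b = 0;
  for (let i = 0; i < pixels.length; i += 4) { // 4 bytes per pixel: R,G,B,A
    r += pixels[i];
    g += pixels[i + 1];
    b += pixels[i + 2];
  }
  const total = r + g + b || 1; // avoid division by zero on empty input
  return [r / total, g / total, b / total]; // densities sum to 1
}
```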

Built With

Jibo SDK, Microsoft Computer Vision, Spotify, RapidAPI, Last.fm, Imgur, Atom