Two of our team members are colorblind, and they often miss the emotional context that color carries, so we developed Synthesthesia to bring color to them through music.

What it does

We developed Synthesthesia to assist colorblind individuals with both color identification and emotional conveyance through distinctive audio feedback. Synthesthesia's Raspberry Pi camera captures an image every few seconds, computes its average RGB values, and translates that data into synthesized audio reflective of the image's emotional tone. It features a precomposed mode as well as a custom synthesis mode to aid in classifying colors and their emotional context, and it also uses various environmental sensors to identify the context of the situation.
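The color-to-emotion step described above can be sketched in Rust, the language the project was built in. The palette entries and emotion pairings below are illustrative assumptions, not Synthesthesia's actual lookup table; the sketch simply picks the closest reference color by squared Euclidean distance and returns its label:

```rust
// Hypothetical sketch: classify an averaged RGB sample against a small
// palette and look up an associated emotion label. Palette colors and
// emotion pairings are assumed for illustration only.
fn nearest_emotion(avg: (u8, u8, u8)) -> (&'static str, &'static str) {
    // (color name, emotion, reference RGB)
    let palette: [(&str, &str, (u8, u8, u8)); 4] = [
        ("red", "excitement", (255, 0, 0)),
        ("blue", "calm", (0, 0, 255)),
        ("green", "serenity", (0, 255, 0)),
        ("yellow", "joy", (255, 255, 0)),
    ];
    // Choose the palette entry with the smallest squared distance to the sample.
    palette
        .iter()
        .min_by_key(|(_, _, (r, g, b))| {
            let dr = *r as i32 - avg.0 as i32;
            let dg = *g as i32 - avg.1 as i32;
            let db = *b as i32 - avg.2 as i32;
            dr * dr + dg * dg + db * db
        })
        .map(|(name, emotion, _)| (*name, *emotion))
        .unwrap()
}

fn main() {
    // A reddish average sample maps to the "red" palette entry.
    let (name, emotion) = nearest_emotion((200, 40, 30));
    println!("{} -> {}", name, emotion);
}
```

In the precomposed mode, the returned label could index a bank of prerecorded clips; in the custom mode it could select synthesis parameters instead.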

How we built it

We built it with a soldering iron, a Raspberry Pi, buttons, LEDs, a 3D-printed housing, the Rust programming language, drilling, tapping, and music composition software.

Challenges we ran into

We underestimated how difficult and time-consuming it would be to synthesize our various audio files, and calculating the average RGB values of an image also proved harder than expected.
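The averaging step can be sketched as follows, assuming the camera frame arrives as a flat RGB byte buffer with 3 bytes per pixel (the buffer layout is an assumption, not the project's actual capture API). Accumulating in a wide integer type avoids overflow on full-size frames:

```rust
// Sketch: average RGB over a flat buffer of 3-byte RGB pixels.
// Returns None for an empty or malformed buffer.
fn average_rgb(frame: &[u8]) -> Option<(u8, u8, u8)> {
    if frame.is_empty() || frame.len() % 3 != 0 {
        return None; // not a valid RGB buffer
    }
    let pixels = (frame.len() / 3) as u64;
    // Accumulate in u64 so large frames cannot overflow the sums.
    let (mut r, mut g, mut b) = (0u64, 0u64, 0u64);
    for px in frame.chunks_exact(3) {
        r += px[0] as u64;
        g += px[1] as u64;
        b += px[2] as u64;
    }
    Some(((r / pixels) as u8, (g / pixels) as u8, (b / pixels) as u8))
}

fn main() {
    // Two pixels: pure red and pure blue average to a purple midpoint.
    let frame = [255u8, 0, 0, 0, 0, 255];
    println!("{:?}", average_rgb(&frame));
}
```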

Accomplishments that we're proud of

Our group had several accomplishments over the course of the weekend. We successfully got the sensor to detect color, and we implemented two different functions on the device: one detects a color and plays a sound that identifies the color itself, while the other detects a color and conveys the emotion associated with it.

What we learned

We learned that each of us sometimes needs to take a step back and look at things from another's perspective. We practiced this by working together as a team in which half the members are colorblind and half are not.

What's next for Synthesthesia

Our next goal is to offer a higher degree of customizability: a wider variety of sounds, the ability for users to add their own music, and options to change the appearance of the device.
