The original inspiration was twofold: we wanted to translate images to sound based on their brightness, and to use the Hilbert curve as an algorithm for mapping each pixel of a two-dimensional image to a frequency on a one-dimensional line.
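The Hilbert-curve idea can be sketched with the classic bitwise index mapping (this is an illustrative version, not the project's actual code): each (x, y) pixel on an n×n grid, with n a power of two, gets a distance d along the curve, and nearby pixels tend to land near each other on the 1-D line.

```javascript
// Rotate/flip a quadrant so sub-curves are oriented consistently.
function rot(n, p, rx, ry) {
  if (ry === 0) {
    if (rx === 1) {
      p.x = n - 1 - p.x;
      p.y = n - 1 - p.y;
    }
    const t = p.x; p.x = p.y; p.y = t; // swap x and y
  }
}

// Map an (x, y) pixel on an n×n grid (n a power of two) to its
// distance d along the Hilbert curve.
function xy2d(n, x, y) {
  const p = { x, y };
  let d = 0;
  for (let s = n >> 1; s > 0; s >>= 1) {
    const rx = (p.x & s) > 0 ? 1 : 0;
    const ry = (p.y & s) > 0 ? 1 : 0;
    d += s * s * ((3 * rx) ^ ry);
    rot(n, p, rx, ry);
  }
  return d;
}
```

That distance d can then be scaled into a frequency range, which is the "2-D image to 1-D line" step the curve provides.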

What it does

The user controls the input parameters by moving their mouse around the image, which generates frequencies based on the average color of the pixels at and near the cursor (averaging over a neighborhood keeps the frequency changes smooth as the mouse moves). The GUI also displays the current frequency's waveform, the average color the frequency is calculated from, and whether the overtones of the frequency are based on major or minor chords.
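A minimal sketch of the averaging-and-mapping idea (the function names and the 220–880 Hz range are illustrative assumptions, not the project's actual values): average the RGB channels of the pixels in a small window around the mouse, then map that brightness linearly to a frequency.

```javascript
// Average brightness of a neighborhood of pixels, each {r, g, b}
// with channels in 0..255. Returns a value in 0..255.
function averageBrightness(pixels) {
  let sum = 0;
  for (const { r, g, b } of pixels) sum += (r + g + b) / 3;
  return sum / pixels.length;
}

// Hypothetical linear map from brightness to an audible range.
function brightnessToFrequency(brightness, fMin = 220, fMax = 880) {
  return fMin + (brightness / 255) * (fMax - fMin);
}
```

Because neighboring pixels contribute to the average, small mouse movements change the brightness (and hence the frequency) gradually rather than in jumps.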

How we built it

We started by calculating the colors of pixels and creating our own scaling algorithm to translate those averaged colors into frequencies. We then moved on to calculating overtones and creating multiple oscillators so that the sound would be richer than a simple sine wave. Finally, we built the GUI and added the frequency graph.
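The "multiple oscillators" step can be sketched as building a harmonic series: each partial is an integer multiple of the base frequency with a decaying gain, and each would drive its own oscillator (e.g. one Web Audio OscillatorNode per partial). The decay law and function name here are assumptions for illustration, not the team's exact recipe.

```javascript
// Compute the first `count` harmonic partials of a base frequency f0.
// Gain falls off as 1/n so higher overtones are progressively quieter.
function harmonicSeries(f0, count) {
  const partials = [];
  for (let n = 1; n <= count; n++) {
    partials.push({ frequency: f0 * n, gain: 1 / n });
  }
  return partials;
}
```

Summing sine oscillators at these frequencies and gains produces a tone with a fuller timbre than a single sine wave.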

Challenges we ran into

Most of our team did not know JavaScript on Friday night, so much of our time was spent learning new languages and syntax. We also ended up learning a lot of music theory, particularly how to scale an initial frequency to produce overtones that sound consonant with it.
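One way to make scaled overtones sound consonant is to use just-intonation chord ratios above the base frequency; this sketch (an assumption for illustration, not necessarily the team's exact scaling) builds a major or minor triad from the frequency ratios 4:5:6 and 10:12:15 respectively.

```javascript
// Frequencies of a triad built on a base frequency f0.
// Major third = 5/4, minor third = 6/5, perfect fifth = 3/2.
function chordFrequencies(f0, quality) {
  const ratios = quality === 'major'
    ? [1, 5 / 4, 3 / 2]   // root, major third, perfect fifth
    : [1, 6 / 5, 3 / 2];  // root, minor third, perfect fifth
  return ratios.map(r => f0 * r);
}
```

Because these ratios are small whole-number fractions, the resulting tones share many harmonics with the base frequency, which is what makes them sound consonant rather than clashing.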

Accomplishments that we're proud of

Much of our time was spent improving the quality of the generated sine tones; the synthesis went through many iterations to become what it is. Also Dah Math =D!!

What we learned

Lots of JavaScript and music theory! We also had lots of great discussions about various algorithms and curves.

What's next for Hilbert

Our long-term goal for this project is to use an AR component instead of a given image to produce frequencies, so that we can map sound onto a real-world environment.
