Inspiration

During the brainstorming phase of this project, since many of our team members come from music and arts backgrounds, we immediately aimed to combine the visual, auditory, and other sensory aspects of art. Taking inspiration from the phenomenon of synesthesia, we began to hypothesize how we could use code to visualize the way our brains neurologically translate music into other senses. Since this hackathon aimed to pair machine learning with music, we chose p5.js and three.js to visualize this sensory perspective, while using tone.js and magenta.js to design neural networks capable of “learning” and reacting to music much as our own neurons do.

This parallel between neural networks and our own neurological reactions to music became the synapse connecting the music and machine learning aspects of this project. Actively participating in making music - whether in a group or individually - has been shown to boost executive brain function and strengthen speech processing, and has been linked to improved memory and empathy.

When designing the visual aspect of Treblemaker in three.js, we decided to partition the visual into three different “spheres” - similar to how music is processed across the auditory cortex, premotor cortex, and superior temporal gyrus. We also created a particle simulator surrounding these spheres, simulating the electrophysiological signals that music sends to sensory neurons.

When implementing the machine learning aspect, we aimed to create an “auditory complex” capable of harmonizing with the tunes played by the user. Here we again took inspiration from synesthesia: synesthetes can often see and mix the colors associated with what they hear, which lets them easily “see” and produce harmonies. We used this idea to train the RNN, much like a brain, to recognize these patterns and, in turn, create harmonies for the player’s input.

What it does

Treblemaker is an immersive visual and auditory experience that combines interactive music creation with mesmerizing 3D graphics. Using the keyboard as a multi-instrumental toolbox, the player is able to concoct unique melodies with the touch of a finger. Drums are located on the left portion of the keyboard while the melody and bass can be found in the middle and right sections, respectively.

To elevate the user’s musical idea, Treblemaker complements the piece by adding some color in the background, using machine learning to determine which instruments and pitches will provide the best harmony.

Treblemaker can serve as a composer’s playground in which one can explore different concepts, moods, and styles while receiving fresh ideas and inspiration from Treblemaker’s musical twists. For non-musicians, Treblemaker’s soothing sounds and visuals simply provide a therapeutic environment to play around in. Furthermore, we hope that our algorithm will introduce users to harmonies they may not have come across before!

How we built it

Treblemaker was developed entirely using HTML, CSS, and JavaScript. The animation visualizer was built with the three.js library, which gave us access to the WebGL renderer, a GPU-accelerated 3D renderer that runs in the browser. We used an accelerated Perlin noise library to create modulations in three spherical objects on the screen. The intensity of these modulations was driven by a Tone.js fast Fourier transform (FFT) object, with the leftmost object reacting more to melodic notes and the rightmost reacting most to percussive drum hits. We also implemented particle systems which, on each keypress, spawn particles that fly away from the spheres into the background fog; the radius of each particle is proportional to the current modulation of its parent object.
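
As a rough illustration, the sketch below shows one way to displace a three.js sphere’s vertices with 3D noise scaled by an audio intensity value. It is a minimal sketch, not our exact code: we use the simplex-noise package as a stand-in for the accelerated Perlin noise library, the variable names are illustrative, and the 0-to-1 intensity is assumed to come from the FFT as described above.

```javascript
// Minimal sketch: noise-driven surface modulation of a three.js sphere.
// `intensity` (0..1) would come from the Tone.js FFT described above.
import * as THREE from 'three';
import { createNoise3D } from 'simplex-noise'; // stand-in for our Perlin noise library

const noise3D = createNoise3D();
const geometry = new THREE.SphereGeometry(1, 64, 64);
const blob = new THREE.Mesh(geometry, new THREE.MeshStandardMaterial({ color: 0x88ccff }));
// `blob` would be added to the scene elsewhere, e.g. scene.add(blob).

function modulateBlob(time, intensity) {
  const pos = geometry.attributes.position;
  for (let i = 0; i < pos.count; i++) {
    // Direction of this vertex from the sphere's center.
    const dir = new THREE.Vector3().fromBufferAttribute(pos, i).normalize();
    // Sample 3D noise along that direction, drifting over time.
    const n = noise3D(dir.x * 2 + time, dir.y * 2 + time, dir.z * 2 + time);
    // Push the vertex outward or inward; louder audio means larger bumps.
    dir.multiplyScalar(1 + 0.35 * intensity * n);
    pos.setXYZ(i, dir.x, dir.y, dir.z);
  }
  pos.needsUpdate = true;
  geometry.computeVertexNormals();
}
```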

For the audio files, we implemented a dictionary of Tone.js Players that mapped each audio file to a specific key. Each of these Players was connected to the Tone.js FFT that controlled the modulation of the visualizer blobs. For the drum samples, we used a lo-fi drum sample kit from Cymatics.fm. However, we could not find suitable audio samples online for the bass and melody lines, so one of our team members recorded each of those lines on electric guitar. This also let us engage more directly with the music side of the hackathon!
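
A minimal sketch of that mapping is shown below; the keys, file names, and bin count are placeholders rather than our actual layout and samples.

```javascript
import * as Tone from 'tone';

const fft = new Tone.FFT(1024); // shared analyser that drives the visualizer blobs

// Placeholder key-to-sample mapping; the real project maps most of the keyboard.
const players = {
  a: new Tone.Player('samples/kick.wav'),     // drums on the left of the keyboard
  g: new Tone.Player('samples/melody_C.wav'), // melody in the middle
  l: new Tone.Player('samples/bass_G.wav'),   // bass on the right
};

// Route every Player to the speakers and into the FFT analyser.
// (Samples load asynchronously; Tone.loaded() resolves once they are ready.)
for (const player of Object.values(players)) {
  player.toDestination();
  player.connect(fft);
}

document.addEventListener('keydown', async (event) => {
  const player = players[event.key];
  if (!player) return;
  await Tone.start();                         // AudioContext must be resumed by a user gesture
  if (player.state === 'started') player.stop(); // retrigger the sample if it is still playing
  player.start();
});
```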

To generate the AI audio, we implemented the MusicRNN model from the Magenta.js library. For melody generation we used the basic_rnn spec, and for drum generation we used the drum_kit_rnn spec. We fed the network the notes most recently played by the user and routed the output audio through a Tone.js Sampler object.
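
A rough sketch of the melody-generation call follows. It assumes Magenta’s hosted basic_rnn checkpoint URL and a hypothetical helper signature; the seed notes, step counts, and temperature are illustrative, not values from our project.

```javascript
import * as mm from '@magenta/music';
import * as Tone from 'tone';

const melodyRnn = new mm.MusicRNN(
  'https://storage.googleapis.com/magentadata/js/checkpoints/music_rnn/basic_rnn'
);

// recentNotes: e.g. [{ pitch: 60, startTime: 0, endTime: 0.5 }, ...] from the user's key presses.
async function harmonize(recentNotes, sampler) {
  await melodyRnn.initialize();

  // Build a NoteSequence from the user's most recent notes, then quantize it.
  const seed = mm.sequences.quantizeNoteSequence({
    notes: recentNotes,
    totalTime: 2,
    tempos: [{ time: 0, qpm: 120 }],
  }, 4);                                       // 4 steps per quarter note

  // Continue the seed for 32 steps; temperature > 1 gives more adventurous output.
  const result = await melodyRnn.continueSequence(seed, 32, 1.1);

  // Play the generated notes through a Tone.js Sampler.
  const now = Tone.now();
  const secondsPerStep = 60 / 120 / 4;
  for (const note of result.notes) {
    sampler.triggerAttackRelease(
      Tone.Frequency(note.pitch, 'midi'),
      (note.quantizedEndStep - note.quantizedStartStep) * secondsPerStep,
      now + note.quantizedStartStep * secondsPerStep
    );
  }
}
```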

Our landing page was built using p5.js, with a start button leading to the main visualizer. This page allows the user to learn how to use the visualization tool before landing on the page with the visualization itself.
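
The sketch below shows one minimal way such a landing page could look in p5.js; the layout, text, and the visualizer path are placeholders, not our actual page.

```javascript
// Minimal p5.js landing page sketch with a start button (global mode).
let startButton;

function setup() {
  createCanvas(windowWidth, windowHeight);
  startButton = createButton('Start');
  startButton.position(width / 2 - 40, height / 2);
  startButton.mousePressed(() => {
    window.location.href = 'visualizer.html'; // hypothetical path to the main visualizer
  });
}

function draw() {
  background(20);
  fill(255);
  textAlign(CENTER);
  text('Press Start to open Treblemaker', width / 2, height / 2 - 40);
}
```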

Challenges we ran into

One massive challenge we ran into was memory leakage. Our three.js environment was based on an in-browser WebGL renderer. This renderer, while GPU accelerated, still has limits on how many objects it can render, which became apparent once we added the particle system. The system was creating a memory leak: the renderer kept memory allocated for objects that no longer existed. To combat this, we looked into the proper disposal of three.js objects, which allowed us to reclaim the memory held by those stale objects, taking a ton of stress off the website.
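
As a rough sketch of that cleanup (the function and object names are illustrative, not from our code), disposing of an expired particle looks something like this:

```javascript
// Illustrative cleanup for an expired particle mesh in three.js.
function disposeParticle(scene, particle) {
  scene.remove(particle);                     // detach it from the scene graph
  if (particle.material.map) {
    particle.material.map.dispose();          // free any texture it used
  }
  particle.material.dispose();                // free the material/shader program
  particle.geometry.dispose();                // free the vertex buffers on the GPU
}
```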

Another challenge was creating the visualizer blobs. For each of these objects, we had to figure out not only how to move the object's surface, but also which value should drive that movement. To move the surface, we assigned an acceleration, velocity, and position to each vertex of the sphere, and included a restoring "friction" force so the sphere would return to its resting shape. Initially, we wanted each blob to move based on which type of instrument was being played (left for drums, center for melody, right for bass). However, we quickly realized it would be difficult to visualize the AI player's output with that mechanic. Instead, we connected the blobs to the first 1/3 of a Tone.js fast Fourier transform representing the current audio context: each blob is controlled by the average magnitude of the transform over an interval equal to 1/9 of the transform's length, mapped from 0 to 1.
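
In code, the per-blob value looks roughly like the sketch below; the bin count and the decibel-to-0..1 mapping are illustrative assumptions, not our exact constants.

```javascript
import * as Tone from 'tone';

const fft = new Tone.FFT(1024);               // analyser every Player is routed into

// blobIndex: 0 = left, 1 = center, 2 = right.
// The three 1/9-length slices together cover the first third of the transform.
function blobIntensity(blobIndex) {
  const bins = fft.getValue();                // magnitudes in decibels
  const sliceLength = Math.floor(bins.length / 9);
  const start = blobIndex * sliceLength;

  let sum = 0;
  for (let i = start; i < start + sliceLength; i++) sum += bins[i];
  const averageDb = sum / sliceLength;        // roughly -100 dB (silence) to 0 dB (loud)

  return Math.min(Math.max((averageDb + 100) / 100, 0), 1); // map to 0..1
}
```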

Accomplishments that we're proud of

We’re proud of our animations for the drum, melody, and bass lines. We wanted to create morphing animations driven by the audio, with larger changes to the animation when more music is played, and with the shape returning to a plain sphere when no audio is playing. To do so, we created acceleration and friction variables: the acceleration of the movement increases when audio is played, and friction gradually brings the sphere back to normal when the player idles.
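
A simplified version of that mechanic is sketched below; the constants and the random audio “kick” are arbitrary illustrations, not our tuned values. Each vertex carries an offset and a velocity, audio adds acceleration, and friction plus a restoring pull settle the surface back to a plain sphere when the player idles.

```javascript
import * as THREE from 'three';

const geometry = new THREE.SphereGeometry(1, 64, 64);
const count = geometry.attributes.position.count;
const offsets = new Float32Array(count);      // displacement of each vertex along its direction
const velocities = new Float32Array(count);
const FRICTION = 0.92;                        // damps motion so the shape settles
const RESTORE = 0.05;                         // pulls each vertex back toward the sphere

function stepSurface(intensity) {             // intensity: 0..1 from the FFT
  const pos = geometry.attributes.position;
  for (let i = 0; i < count; i++) {
    // Audio-driven acceleration plus a restoring force toward the resting radius.
    const acceleration = intensity * (Math.random() - 0.5) * 0.1 - RESTORE * offsets[i];
    velocities[i] = (velocities[i] + acceleration) * FRICTION;
    offsets[i] += velocities[i];

    const dir = new THREE.Vector3().fromBufferAttribute(pos, i).normalize();
    dir.multiplyScalar(1 + offsets[i]);       // push the vertex along its direction
    pos.setXYZ(i, dir.x, dir.y, dir.z);
  }
  pos.needsUpdate = true;
  geometry.computeVertexNormals();
}
```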

We’re also very proud of producing a working magenta.js/tensorflow.js machine learning model, capable of generating harmonies based on simple (MIDI) note inputs.

What we learned

We learned how to use the three.js, tone.js, and magenta.js libraries. Nobody on our team had used any of these libraries before, so this project was an entirely new experience. However, thanks to past experience in p5.js and TensorFlow.js, we were able to learn how to utilize all of the libraries effectively, which in turn allowed us to develop a really interesting and unique final product.

We also learned a lot about music theory! In experimenting with chromatic scales, we were better able to understand the musical mechanisms behind Tone.js, as well as enhance our understanding of chord structures and harmonies when developing the magenta.js Music RNN.

Also, we'd like to extend a huge thank you to Rachel, Tero, and Stephanie for running the workshops for this hackathon. The workshops were packed with information that was essential in creating our final project, and we are super grateful to have been able to attend such interesting and useful presentations.

What's next for Treblemaker

Similar to Patatap, we hope to implement different instruments that can be flipped through by pressing the spacebar. While changing the instruments, we also hope to change the mood and environment of the animations to match the sound of each instrument. This would let us better capture the different genres our users enjoy, including lo-fi, classical, and EDM sounds, and in turn make Treblemaker’s environments and sensory experience enjoyable for an even wider audience.

We also hope to integrate our code into an environment built in Unreal - similar to how “3D audio” is growing in popularity with surround sound, we hope to create 3D/VR environments that let our users become even more immersed in the music they create.
