Inspiration

We're a team of music creators. We've worked with youngteam, collaborated with a large community across Discord, and some of our pieces have won prizes and been listened to by KennyBeats. One of the biggest issues we've seen in the field is how hard it is for music creators to casually create a beat that pairs well with their chord progressions or melodies. We wanted a product we'd want to use ourselves, so we began researching existing products. The innovative design of the Teenage Engineering OP-1 gave us an elegant end result to target, and the NSynth Super gave us great ideas on how neural networks can aid musicians. After understanding what these products offered, we began designing DrummerBoy: a drum machine that quickly and accurately produces complementary drum beats for a given melody. Dylan had built and sold a number of modular synthesizers, so we set out on the ambitious journey of combining machine learning with our experience in circuit design.

What it does

DrummerBoy takes in an audio file, which might contain a chord progression played on a piano or a melody strummed on a guitar. Regardless of which instruments make up the input, we process the file, analyze its tempo and note content, and generate a drum beat that matches it.

How we built it

We attacked the hardware and software sides in parallel. On the software side, we began by implementing the MusicVAE model to gain familiarity with Google's Magenta. Once we had a functional pipeline that could not only sample from audio inputs but also interpolate between two different inputs, we moved on to DrumsRNN to evaluate how we could produce a drum beat to accompany the input melody. We considered factors such as the BPM and the quantized MIDI notes, and used them to produce a drum pattern with DrumsRNN and a model trained on Magenta's architecture, so the drums accompany the instrumentalist or artist as they play. We used both Python and C to communicate with the Teensy and send signals corresponding to the MIDI file.
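To give a sense of the DrumsRNN step, here's a minimal sketch assuming Magenta's pretrained drum_kit_rnn bundle; the file names, generation length, and temperature are placeholders rather than our exact settings:

```python
# Sketch of the DrumsRNN step: generate a drum pattern that continues
# from a primer sequence's timing. Assumes Magenta's pretrained
# drum_kit_rnn.mag bundle; paths and lengths are placeholders.
import note_seq
from note_seq.protobuf import generator_pb2
from magenta.models.drums_rnn import drums_rnn_sequence_generator
from magenta.models.shared import sequence_generator_bundle

# Load the pretrained bundle and build the 'drum_kit' generator.
bundle = sequence_generator_bundle.read_bundle_file('drum_kit_rnn.mag')
generator = drums_rnn_sequence_generator.get_generator_map()['drum_kit'](
    checkpoint=None, bundle=bundle)
generator.initialize()

# The primer carries the BPM and quantized timing extracted upstream.
primer = note_seq.midi_file_to_note_sequence('melody.mid')

# Ask for eight more seconds of drums after the primer ends.
options = generator_pb2.GeneratorOptions()
options.args['temperature'].float_value = 1.0
options.generate_sections.add(
    start_time=primer.total_time, end_time=primer.total_time + 8.0)

drums = generator.generate(primer, options)
note_seq.sequence_proto_to_midi_file(drums, 'drums.mid')
```

The primer sequence carries the tempo and quantized timing from the input analysis, and generate_sections tells the model where the primer ends and the drums should begin.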

For the hardware, we decided to use a Raspberry Pi to run the model, communicating over serial with a Teensy 3.2 to generate and output audio. The Teensy Audio Library was perfect for loading and playing samples based on a MIDI file. The Teensy was connected via USB to the Raspberry Pi, and a Python script sent bytes to the Teensy as note signals. WAV drum samples were converted to bytestreams using wav2sketch and flashed onto the Teensy. These drum samples were then played using the AudioPlayMemory object from the Teensy Audio Library and output through the Teensy's built-in DAC. Finally, this signal was fed through a 2.5mm jack into a speaker.
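The Pi-side sender is simple. Here's a minimal sketch, assuming a one-byte-per-drum-hit protocol and that the Teensy enumerates at /dev/ttyACM0; both are assumptions for illustration, not our exact firmware contract:

```python
# Sketch of the Pi-side sender: stream note-on events from the generated
# MIDI file to the Teensy as single bytes. The one-byte-per-hit protocol
# and the /dev/ttyACM0 port are illustrative assumptions.
import mido
import serial

ser = serial.Serial('/dev/ttyACM0', 115200)

# mido's play() yields messages in real time, so each byte reaches the
# Teensy at the moment its drum hit should sound.
for msg in mido.MidiFile('drums.mid').play():
    if msg.type == 'note_on' and msg.velocity > 0:
        ser.write(bytes([msg.note]))  # e.g. 36 = kick, 38 = snare (GM drums)

ser.close()
```

Keeping the timing on the Pi lets the Teensy firmware stay a dumb sample player: it just maps each incoming byte to a flashed sample and triggers AudioPlayMemory.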

Challenges we ran into

We began this project on May 21, just two months after we returned home from Berkeley due to the COVID-19 pandemic. Throughout the process we hit an abundance of errors, but two substantial hurdles stand out, one on each side. On the software side, we struggled to implement MusicVAE, notably when generating our own checkpoints, and to automate the conversion of audio (WAV) files to MIDI files. On the hardware side, we struggled with navigating the Teensy Audio Library and communicating between subsystems, in this case with the Raspberry Pi.
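For context, the WAV-to-MIDI automation we were after looks roughly like the sketch below, using librosa's pYIN pitch tracker and pretty_midi. It's deliberately naive (one note per voiced run, median pitch); real inputs need onset detection and smarter segmentation, which is where our errors lived:

```python
# Rough sketch of the WAV -> MIDI step: track pitch with librosa's pYIN,
# then write one MIDI note per voiced run. File names are placeholders.
import librosa
import numpy as np
import pretty_midi

y, sr = librosa.load('input.wav')
f0, voiced, _ = librosa.pyin(
    y, fmin=librosa.note_to_hz('C2'), fmax=librosa.note_to_hz('C7'))
times = librosa.times_like(f0, sr=sr)

midi = pretty_midi.PrettyMIDI()
piano = pretty_midi.Instrument(program=0)

start = None
for i, is_voiced in enumerate(voiced):
    if is_voiced and start is None:
        start = i  # a voiced run begins
    elif not is_voiced and start is not None:
        # Summarize the run with its median pitch, rounded to a MIDI note.
        pitch = int(round(np.nanmedian(librosa.hz_to_midi(f0[start:i]))))
        piano.notes.append(pretty_midi.Note(
            velocity=100, pitch=pitch, start=times[start], end=times[i]))
        start = None

midi.instruments.append(piano)
midi.write('input.mid')
```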

Accomplishments that we're proud of

We're so proud that we actually completed this drum machine! Our full pipeline works, and it wouldn't have been possible without immense effort from every team member. We learned how to combine our unique skill sets, ranging from music creation (shared) to circuit design, machine learning, and integration with the Raspberry Pi. With each issue we ran into, we made sure to address it properly: discussing potential solutions as a team and doing adequate research before implementing our ideas in code.

What we learned

This was our first experience working with Magenta, and we gained a lot of familiarity with it. In addition, two of our members had experience with neural networks but had never applied them to music; previously, we'd focused on computer vision and NLP applications, never thinking we could combine our passions for music creation and ML. Moving forward, we've come away with so many ideas we want to pursue, particularly around DDSP. We've learned that there are many interesting applications within reach of our skill sets, and our team is excited to tackle these projects together while continuing to work on DrummerBoy.

What's next for DrummerBoy

Something we experimented with, but were ultimately unable to complete by this deadline, was letting the user choose drum beats from different genres and experiment with them. We ran into issues implementing this idea with DrumsRNN, and it's the next feature we want to offer in DrummerBoy. We're also hoping to expand the hardware side: we couldn't finish the ability for users to fine-tune features of each MIDI sound, something we believe would add real value alongside a choice of genres.
