I wanted to build an app for learning piano where I could see the notes I am playing alongside the notes I am supposed to play. It would also be nice to generate unique sequences so that (1) practice wouldn't be too repetitive, and (2) the instructor wouldn't have to hard-code too many sequences into the app. This project was a step in that direction: a sequencer that records live notes from the computer keyboard (and soon a MIDI keyboard) and regenerates specified sections each time through the loop.
What it does
It is a looper and sequencer. Once you define a basic loop, you can select a part of it for the app to improvise on each pass. For example, you can record a three-bar theme and have the app improvise the fourth bar each time the loop comes around. You can also play along with the keyboard (asdfghjkl;) to either record new notes or visualize your notes against the current sequence. While playing, you can change the BPM, the number of bars, and how random the improvisation should be.
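The core idea of regenerating only a marked region on each pass can be sketched in a few lines of plain JavaScript. This is an illustrative data model, not the app's actual code; the function names and the note representation are hypothetical.

```javascript
// Hypothetical loop model: an array of bars plus a marked region
// [improviseStart, improviseEnd) that gets regenerated on every pass.
function makeLoop(bars, improviseStart, improviseEnd) {
  return { bars, improviseStart, improviseEnd };
}

// Each pass keeps the fixed bars and replaces only the improvised region.
// `improvise` is a stand-in for the generator (e.g. an RNN continuation).
function nextPass(loop, improvise) {
  return loop.bars.map((bar, i) =>
    i >= loop.improviseStart && i < loop.improviseEnd ? improvise(bar, i) : bar
  );
}

// Example: a 4-bar loop where only the 4th bar is regenerated each time.
const loop = makeLoop([["C4"], ["E4"], ["G4"], ["C5"]], 3, 4);
const pass = nextPass(loop, () => ["A4", "B4"]); // fixed bars are untouched
```

The fixed bars stay byte-for-byte the same across passes, so only the improvised region ever changes between loops.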
How I built it
I used this as an opportunity to explore Vue 3.0 (still in beta) as well as D3.js, P5.js, Tone.js, and Magenta.js. The Vue app stores the raw state of the sequence along with user controls such as the BPM and the dimensions of the window. Vue's computed properties derive a second layer of values from that raw state, so they update automatically when it changes. D3 provides the scales that map between pixel space and musical time and pitch. Tone.js handles all the actual audio playback and keeps everything in sync. I used Magenta's MusicRNN to generate continuation sequences based on the previously entered sequence. P5 does all the drawing, and I used its WebGL mode for better performance when drawing notes in real time.
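The pixel-to-time mapping can be illustrated as a simple linear scale. The app uses d3.scaleLinear for this; the sketch below is a plain-JS equivalent, with illustrative numbers (a 4-bar loop in 4/4, i.e. 16 beats, drawn across an 800px-wide canvas).

```javascript
// Plain-JS analogue of d3.scaleLinear: maps a domain [d0, d1] onto a
// range [r0, r1], with an invert() for going from pixels back to time.
function linearScale([d0, d1], [r0, r1]) {
  const scale = (x) => r0 + ((x - d0) / (d1 - d0)) * (r1 - r0);
  scale.invert = (y) => d0 + ((y - r0) / (r1 - r0)) * (d1 - d0);
  return scale;
}

// Illustrative setup: 16 beats of musical time across 800 pixels.
const beatToX = linearScale([0, 16], [0, 800]);
beatToX(4);          // beat 4 -> x = 200
beatToX.invert(600); // x = 600 -> beat 12
```

A second scale of the same shape maps pitch to the vertical axis, which is what lets the drawing code and the mouse/keyboard handlers translate freely between screen coordinates and musical coordinates.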
Challenges I ran into
The first time I implemented the sequencer, I used D3.js to draw everything with SVG and transitions. When there were a lot of notes on screen, performance suffered. I switched to P5.js, which allowed for smoother animations and better mouse and keyboard interaction.
I originally wanted to use the @magenta.js Recorder class to record MIDI input, but I needed more control in order to play audio with Tone.js at the same time. There were also conflicts between the latest Tone.js and the version bundled with the Magenta library.
I'm still working through a few bugs with initial lag, and getting MIDI input to work seamlessly.
Accomplishments that I'm proud of
I'm proud of the app itself, and especially of being able to play notes on the keyboard and see them synced to a real-time display while the rest of the loop keeps playing. I've been picturing an app like this for months, if not years, and I finally made it happen.
What I learned
I learned all about drawing with P5 and how to sequence audio in Tone.js. Syncing visuals with audio and user interaction has a lot of nuance. I also learned how to combine all of these separate technologies in a single npm application. While I'm proud of the app I built, I'm very excited about what I'll be able to build next with everything I've learned.
What's next for RoboLoop
The first thing I want to do is clean up and refactor the code, as it got pretty messy toward the end of the hackathon. I also want to fix the lag that occurs when the loop first starts playing. Next, I want the ability to save individual layers of notes and specify the instrument or SoundFont for each layer, so you could combine bass, melody, and drums. It would also be pretty cool to display sheet music as the loops play. Ultimately, I want to hook up pitch-detection models to auto-transcribe loops in real time.