Model source code

Dadabots SampleRNN

Summary Video

Video

Written description of our solution

We generate music in modern genres such as black metal, math rock, skate punk, and beatbox. To do this, we modified a text-to-speech synthesis architecture.

This early example of neural synthesis is a proof of concept for how machine learning can drive new types of music software. Creating music can be as simple as specifying a set of musical influences on which a model trains. We demonstrate a method for generating albums that imitate bands in experimental music genres previously unrealized by traditional synthesis techniques (e.g. additive, subtractive, FM, granular, concatenative). Unlike MIDI and symbolic models, SampleRNN generates raw audio in the time domain. This capability becomes increasingly important in modern music styles where timbre and space are used compositionally. Long developmental compositions with rapid transitions between sections become possible by using LSTM units and increasing the depth of the network beyond the number of layers used for speech datasets. We are delighted by the unique characteristic artifacts of neural synthesis.
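To make the tiered idea concrete, here is a minimal sketch of a SampleRNN-style model written in PyTorch rather than the original Theano codebase; the tier structure, layer sizes, frame size, and depth shown are illustrative assumptions, not the exact configuration from our experiments.

```python
import torch
import torch.nn as nn

Q_LEVELS = 256      # 8-bit quantization of the raw waveform
FRAME_SIZE = 16     # samples per frame fed to the top tier (assumed)
DIM = 1024          # hidden width (assumed)
N_RNN = 5           # more recurrent layers than typical speech configs

class FrameLevelTier(nn.Module):
    """Top tier: an LSTM over frames of raw samples that conditions the sample tier."""
    def __init__(self):
        super().__init__()
        self.input_proj = nn.Linear(FRAME_SIZE, DIM)
        self.rnn = nn.LSTM(DIM, DIM, num_layers=N_RNN, batch_first=True)
        self.output_proj = nn.Linear(DIM, DIM * FRAME_SIZE)

    def forward(self, frames, hidden=None):
        # frames: (batch, n_frames, FRAME_SIZE), real-valued samples in [-1, 1]
        x = self.input_proj(frames)
        out, hidden = self.rnn(x, hidden)
        cond = self.output_proj(out)
        # one conditioning vector per raw audio sample
        return cond.view(frames.size(0), -1, DIM), hidden

class SampleLevelTier(nn.Module):
    """Bottom tier: predicts a softmax over the next quantized sample value."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(Q_LEVELS, DIM)
        self.mlp = nn.Sequential(
            nn.Linear(DIM * 2, DIM),
            nn.ReLU(),
            nn.Linear(DIM, Q_LEVELS),
        )

    def forward(self, prev_samples, cond):
        # prev_samples: (batch, T) integer indices; cond: (batch, T, DIM)
        x = torch.cat([self.embed(prev_samples), cond], dim=-1)
        return self.mlp(x)  # logits over Q_LEVELS at every step
```

The frame-level tier conditions the sample-level tier, which predicts a distribution over quantized sample values; stacking more recurrent layers than speech models typically use is what lets the model hold together longer-range structure in the generated audio.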

Read more:

Our NIPS 2017 paper: "Generating Black Metal and Math Rock: Beyond Bach, Beethoven, and Beatles"

Our MUME 2018 paper: "Generating Albums with SampleRNN to Imitate Metal, Rock, and Punk Bands"

Sample dataset produced by the model

Here is a raw audio dataset produced by our model, which was trained (with his permission) on beatboxing by UK champion beatboxer Reeps One.
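For readers who want to prepare raw-audio training data of their own, the sketch below shows one plausible way to slice a long recording into fixed-length, linearly quantized chunks; the sample rate, chunk length, and use of librosa are assumptions for illustration, not our exact preprocessing pipeline.

```python
import numpy as np
import librosa

SR = 16000          # training sample rate (assumption)
CHUNK_SECONDS = 8   # length of each training example (assumption)

def make_chunks(path):
    # load mono audio as floats in [-1, 1]
    audio, _ = librosa.load(path, sr=SR, mono=True)
    chunk_len = SR * CHUNK_SECONDS
    n = len(audio) // chunk_len
    chunks = audio[: n * chunk_len].reshape(n, chunk_len)
    # linear 8-bit quantization: the discrete targets a SampleRNN-style model predicts
    return np.clip((chunks + 1.0) / 2.0 * 255.0, 0, 255).astype(np.uint8)
```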

Sample application for testing the model

Other deep learning models may be packaged inside a user-facing app; in our case, the real-world application is the curated use of the generated audio in one's own music production process.

Perhaps the best example of this is how Reeps One uses our neural-generated audio as part of his artistic narrative, including the "battle the AI" scene in the documentary we shot inside the anechoic chamber at Nokia Bell Labs. Watch the video to see it.

Since our December 2017 paper dropped, we've received collaboration requests from numerous artists, including a UK champion beatboxer, a Grammy-nominated band, a legendary breakcore artist, a talented Irish vocal artist, and several international metal and rock bands. Music created with this process will be released through record labels later this year and next.

Documentation explaining how to deploy the model, how to deploy the demonstration of the model, and all supporting toolkits and programming languages.

Usage documentation is available in our GitHub README.

Detailed setup instructions are available in our GitHub wiki.
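As a complement to those instructions, the sketch below illustrates the general shape of autoregressive generation from a trained sample-level model; the `model` interface, window size, and sampling loop here are assumptions for illustration, and the real commands and scripts are documented in the README and wiki.

```python
import torch

Q_LEVELS = 256   # quantization levels, matching training
WINDOW = 1024    # receptive field fed back into the model (assumption)

@torch.no_grad()
def generate(model, n_samples, device="cpu"):
    # start from "silence": the midpoint of the quantization range
    audio = torch.full((1, WINDOW), Q_LEVELS // 2, dtype=torch.long, device=device)
    for _ in range(n_samples):
        logits = model(audio[:, -WINDOW:])             # expected shape: (1, Q_LEVELS)
        probs = torch.softmax(logits, dim=-1)
        nxt = torch.multinomial(probs, num_samples=1)  # sample, don't argmax
        audio = torch.cat([audio, nxt], dim=1)
    # map quantized indices back to floats in [-1, 1]
    return audio[0, WINDOW:].float() / (Q_LEVELS - 1) * 2.0 - 1.0
```

Sampling from the softmax rather than taking the argmax keeps the output from collapsing into repetitive loops, which is part of where the characteristic artifacts of neural synthesis come from.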

In the News

The Outline

Metal Injection

Loudwire

theneedledrop

Recently, we were interviewed by The Guardian and NPR.
