Inspiration

Recent advances in deep neural networks have enabled algorithms to compose music that is comparable to music composed by humans. However, few algorithms let the user generate music with tunable parameters. The ability to tune properties of the generated music would make these systems far more practical tools for artists, filmmakers, and composers. Our goal is to build, as a proof of concept, an end-to-end generative model that composes music conditioned on a specific mixture of musical styles.

What it does

DeepJ is a deep learning system that acts as a DJ: it can compose music and mix different genres together. The website demos DeepJ's near-real-time music generation. DeepJ also works with Amazon Echo and can play the music it composes in real time through an Alexa skill.
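As a rough illustration of what "mixing genres" means here (a sketch of the idea, not DeepJ's actual interface), a composition request can be reduced to a normalized vector of genre weights that conditions generation; the genre list below is hypothetical:

```python
import torch

# Hypothetical genre ordering used throughout these sketches.
GENRES = ["baroque", "classical", "romantic", "jazz"]

def style_vector(weights):
    """Turn a {genre: weight} dict into a normalized mixture vector
    that a style-conditioned generator could consume."""
    v = torch.tensor([float(weights.get(g, 0.0)) for g in GENRES])
    return v / v.sum()

# Mixing two genres is then just an interpolation of their weights.
print(style_vector({"baroque": 1, "jazz": 1}))
# tensor([0.5000, 0.0000, 0.0000, 0.5000])
```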

How we built it

We used PyTorch to build a recurrent neural network (an LSTM) and trained it on a corpus of existing music by famous composers. We used Docker containers to deploy our model to AWS.
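A minimal sketch of this kind of model, assuming an event-token representation of the MIDI corpus and a genre-mixture vector like the one above; the vocabulary size, layer sizes, and conditioning scheme here are illustrative, not our exact architecture:

```python
import torch
import torch.nn as nn

class MusicLSTM(nn.Module):
    """Event-token LSTM conditioned on a genre-mixture vector (sketch)."""
    def __init__(self, vocab_size=388, style_dim=4, embed_dim=256, hidden_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # The style mixture is concatenated to the input at every timestep.
        self.lstm = nn.LSTM(embed_dim + style_dim, hidden_dim,
                            num_layers=2, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, events, style, hidden=None):
        # events: (batch, time) integer note/event tokens
        # style:  (batch, style_dim) genre mixture, e.g. [0.5, 0, 0, 0.5]
        x = self.embed(events)
        style = style.unsqueeze(1).expand(-1, x.size(1), -1)
        x, hidden = self.lstm(torch.cat([x, style], dim=-1), hidden)
        return self.out(x), hidden

# One hypothetical training step with teacher forcing on dummy data.
model = MusicLSTM()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

events = torch.randint(0, 388, (8, 64))            # stand-in for a MIDI batch
style = torch.tensor([[1.0, 0.0, 0.0, 0.0]] * 8)   # e.g. 100% baroque

optimizer.zero_grad()
logits, _ = model(events[:, :-1], style)           # predict the next event
loss = criterion(logits.reshape(-1, 388), events[:, 1:].reshape(-1))
loss.backward()
optimizer.step()
```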

Challenges we ran into

Deploying a deep learning system to production was challenging. Training on a corpus of MIDI music files also took a long time, which made it difficult to iterate on and optimize the model. The Alexa API posed its own challenges, and linking it up with our server was difficult.

Accomplishments that we're proud of

We successfully created a website and synthesized music on our Docker containers.

What we learned

  1. How to deploy a deep learning model to production.
  2. How to optimize for speed on an AWS EC2 instance.
  3. How to synthesize MIDI files into WAV files (see the sketch after this list).
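For item 3, a minimal sketch of the MIDI-to-WAV step, assuming the FluidSynth command-line tool and a General MIDI soundfont are installed; the file and soundfont names are placeholders:

```python
import subprocess

def midi_to_wav(midi_path, wav_path, soundfont="FluidR3_GM.sf2", rate=44100):
    """Render a generated MIDI file to a WAV file with the FluidSynth CLI."""
    subprocess.run(
        ["fluidsynth", "-ni", soundfont, midi_path,
         "-F", wav_path, "-r", str(rate)],
        check=True,
    )

midi_to_wav("output.mid", "output.wav")
```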

What's next for DeepJ

The model we trained could be better: the music DeepJ generates is not yet excellent, and with more training time it could be more musically plausible. The next steps for DeepJ are to improve the quality of its output and to complete a full Alexa integration.
