This idea came from the personal experience of one of our team members, who writes music. He mentioned that one of the harder parts of composing new music is creating transitions between two pieces or melodies.

What it does

Our solution uses a neural network to fuse two MIDI files together and create a transition between them. The autoencoder is trained on a subset of the Million Song Dataset.
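One common way to build such a transition with an autoencoder is to interpolate between the latent codes of the two pieces and decode each intermediate code. This is a minimal sketch of that idea, not our exact pipeline; `z_a` and `z_b` are stand-ins for encoder outputs (random here for illustration), and the step count is an arbitrary choice.

```python
import numpy as np

# Stand-ins for the encoder's latent codes of the two pieces (hypothetical).
rng = np.random.default_rng(0)
z_a = rng.normal(size=64)   # latent code of piece A
z_b = rng.normal(size=64)   # latent code of piece B

n_steps = 8  # how many intermediate slices the transition should have

# Linear interpolation in latent space: each intermediate code would then be
# decoded back into a short segment of music.
transition = [(1 - t) * z_a + t * z_b for t in np.linspace(0.0, 1.0, n_steps)]

# The path starts at piece A's code and ends at piece B's code.
assert np.allclose(transition[0], z_a)
assert np.allclose(transition[-1], z_b)
```

Decoding each element of `transition` in order yields music that gradually morphs from the first piece into the second.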

How we built it

We built this mostly in Python with PyTorch, and we also built a user-friendly front end for interacting with the tool. We built two models: an autoencoder based on one-dimensional convolutional layers, and an LSTM.
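A convolutional autoencoder of this kind might look like the sketch below. The layer sizes, kernel widths, and the piano-roll input shape (128 pitches over time) are assumptions for illustration, not our actual hyperparameters; the 1-D convolutions slide along the time axis.

```python
import torch
import torch.nn as nn

class MidiAutoencoder(nn.Module):
    """Hypothetical sketch: compress a piano roll with 1-D convolutions."""

    def __init__(self, n_pitches=128, latent_channels=64):
        super().__init__()
        # Encoder: halve the time dimension twice (stride-2 convolutions).
        self.encoder = nn.Sequential(
            nn.Conv1d(n_pitches, 256, kernel_size=4, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv1d(256, latent_channels, kernel_size=4, stride=2, padding=1),
        )
        # Decoder: mirror the encoder with transposed convolutions.
        self.decoder = nn.Sequential(
            nn.ConvTranspose1d(latent_channels, 256, kernel_size=4, stride=2, padding=1),
            nn.ReLU(),
            nn.ConvTranspose1d(256, n_pitches, kernel_size=4, stride=2, padding=1),
            nn.Sigmoid(),  # note-on probability per pitch per time step
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z

x = torch.rand(1, 128, 64)        # (batch, pitches, time steps)
recon, z = MidiAutoencoder()(x)
print(recon.shape, z.shape)       # reconstruction matches input shape
```

The latent tensor `z` is what a transition step would interpolate over before decoding.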

Challenges we ran into

Neural networks are hard to train in general. Despite the crunch hours, the limited time meant we couldn't train a model we were happy with.

Accomplishments that we're proud of

We managed to get a fully functioning product, although the performance might be suboptimal.

What we learned

We were naive to think that we could train a good network in just 24 hours; we should have implemented simpler solutions as a backup.

What's next for A musical autocompleter

Improve the convergence of both models during training, use cloud GPUs to speed up the training process, and perhaps change the LSTM architecture to reduce vanishing gradients.

Built With

python, pytorch