Creating music is a time-intensive process that requires experimentation, creative thinking, and inspiration, and many people are now exploring the role of machine learning as a tool in this creative process. By using machine learning, composers and musicians could extend (not replace!) their creative processes and create interesting new art.
For this final project, I wanted to see whether a model could be trained to generate new music reminiscent of the Pokémon game series. I used Google Magenta's polyphony-rnn, a Recurrent Neural Network (RNN) whose cells implement Long Short-Term Memory (LSTM), which improves the RNN's ability to learn the longer-term structures common in music. The model was trained on 14 hours of Pokémon music from the GBA/NDS-era games (FRLG, RSE, DPPt, HGSS, BW) and validated using cross-entropy loss on held-out data. The final model reaches an accuracy of ~70%. While accuracy and loss describe performance well for strict classification tasks, they don't necessarily tell us whether the goal of creating Pokémon-sounding music is being met. That is something listeners have to decide, so go have a listen :)
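To make the evaluation metrics concrete: the model is trained to predict the next musical event in a sequence, and the reported cross-entropy and accuracy are computed per event on the held-out set. Here is a minimal sketch of how those two numbers are derived from the model's raw scores (the function names and array shapes are illustrative, not Magenta's actual API):

```python
import numpy as np

def softmax(logits):
    """Numerically stable softmax over the last axis."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def eval_next_event(logits, targets):
    """Cross-entropy loss and accuracy for next-event prediction.

    logits:  (num_events, vocab_size) unnormalized scores from the RNN
    targets: (num_events,) index of the true next event at each step
    """
    probs = softmax(logits)
    n = len(targets)
    # Cross-entropy: mean negative log-probability assigned to the true event.
    loss = -np.mean(np.log(probs[np.arange(n), targets]))
    # Accuracy: fraction of steps where the most likely event is the true one.
    acc = np.mean(probs.argmax(axis=-1) == targets)
    return loss, acc
```

A model that puts all its probability mass on the correct events drives the loss toward zero and the accuracy toward 1.0, while a model that guesses uniformly over a vocabulary of size V has a loss of log(V), which is why the two metrics are reported together.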