Inspiration

In 1997, Professor Graham Harding of Aston University was urgently flown to Japan after an episode of the anime series "Pokemon" caused nearly 700 children to suffer seizures. Professor Harding went on to develop a new set of rules that allow for visual effects in animation without causing ill health effects... what a legend.

We felt inspired, and developed a few hacks with a Pokemon theme.

What it does

Part #1 - We developed a machine learning model that predicts, from brain activity read by a Muse EEG headset, whether or not you are watching Pokemon.

Part #2 - We developed machine learning models that designed Pokemon species. First, we trained a Deep Convolutional Generative Adversarial Network (DCGAN) on a dataset of Pokemon images, and the AI began to draw its own after a few hours of learning. Next, we introduced Long Short-Term Memory (LSTM) neural networks that learn character sequence generation ('predictive writing') and fed them a Pokedex of Pokemon names and descriptions. The overall outcome was images of new Pokemon, complete with names and descriptions. Though the output didn't make /too/ much sense, the AI learnt to write not only full words but proper sentence structure too.
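The character-level setup can be sketched roughly as follows. This is a minimal illustration of how a Pokedex text gets turned into training pairs for a next-character LSTM; the toy names, window length, and variable names here are placeholders, not our actual dataset or hyperparameters.

```python
import numpy as np

# Toy corpus standing in for the Pokedex text (illustrative names only)
corpus = "bulbasaur\nivysaur\nvenusaur\ncharmander\ncharmeleon\n"

# Build the character vocabulary
chars = sorted(set(corpus))
char_to_ix = {c: i for i, c in enumerate(chars)}

# Slide a fixed-length window over the text: each input is `seq_len`
# characters, and the target is the single character that follows it.
seq_len = 5
X, y = [], []
for i in range(len(corpus) - seq_len):
    X.append([char_to_ix[c] for c in corpus[i:i + seq_len]])
    y.append(char_to_ix[corpus[i + seq_len]])

# One-hot encode for a softmax/categorical-crossentropy LSTM
X_onehot = np.eye(len(chars))[np.array(X)]  # (samples, seq_len, vocab)
y_onehot = np.eye(len(chars))[np.array(y)]  # (samples, vocab)
```

At generation time, the trained network is seeded with a short string and repeatedly sampled one character at a time, feeding each prediction back in as input.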

How we built it

Over 2,550 statistical features are extracted from 0.25- and 0.5-second time windows run over the brainwave signal; windowing is needed because brainwaves are dynamic and temporal, yet the models need nice static data objects. The features range from simple statistics like the min and max values of a window to more complex ones such as Shannon entropy and log-covariance features.
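A stripped-down version of that windowed extraction might look like this. The sample rate, window stride, and the particular statistics shown are assumptions for illustration; the real pipeline computes far more features per window (and log-covariance applies across multiple EEG channels, so it is omitted here).

```python
import numpy as np

FS = 256           # assumed Muse sample rate in Hz (illustrative)
WINDOW_SECS = 0.5  # one of the two window sizes described above

def shannon_entropy(x, bins=16):
    """Shannon entropy of a window's amplitude histogram."""
    counts, _ = np.histogram(x, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def window_features(window):
    """A handful of the per-window statistics; the full set is much larger."""
    return [window.min(), window.max(), window.mean(), window.std(),
            shannon_entropy(window)]

def extract(signal, fs=FS, secs=WINDOW_SECS):
    """Run non-overlapping windows over the signal, one feature row each."""
    step = int(fs * secs)
    return np.array([window_features(signal[i:i + step])
                     for i in range(0, len(signal) - step + 1, step)])

# Example: 2 seconds of fake single-channel EEG -> 4 windows x 5 features
rng = np.random.default_rng(0)
feats = extract(rng.normal(size=2 * FS))
```

Each row of `feats` is one static feature vector, which is what the downstream classifiers consume.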

Google Cloud Platform and the Keras API handled most of the learning processes.

The DCGAN architecture (introduced by Radford et al.) was trained on a GPU, and Andrej Karpathy's char-rnn model was used to create the character-predictive LSTM.

Challenges we ran into

Brainwave data is notoriously hard to classify because the brain is so complex. A lot of trial and error and one all-nighter later, we finally cracked it. 94% bois.

After statistical extraction, just 8 minutes of brainwave data produced nearly sixty megabytes of raw CSV. This was majorly difficult to edit and save, because the data had to be correctly batched for the LSTM neural networks, which expect fixed-length input sequences so that each step of the network knows the previous hidden state it is building on.
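The batching step amounts to regrouping the flat CSV rows into the 3D shape Keras LSTMs expect, (samples, timesteps, features). The sizes below are a scaled-down stand-in, not our real dimensions (the actual matrix was on the order of thousands of rows by 2,550 features):

```python
import numpy as np

# Scaled-down stand-in for the extracted feature CSV
rows, n_features, timesteps = 12, 4, 3
flat = np.arange(rows * n_features, dtype=float).reshape(rows, n_features)

# Keras LSTMs consume input shaped (samples, timesteps, features), so the
# flat feature rows must be grouped into fixed-length sequences; trailing
# rows that don't fill a whole sequence are dropped.
n_samples = rows // timesteps
batched = flat[:n_samples * timesteps].reshape(n_samples, timesteps, n_features)
```

Getting this regrouping wrong silently scrambles which windows the LSTM sees as consecutive, which is why the sixty-megabyte CSV was so painful to wrangle.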

Accomplishments that we're proud of

The current state of the art for reading emotions from brainwave data is around 95%, and we managed to score 94.07% in less than a day. With further work on ensemble learning, we could beat the state of the art and publish. Scientists = rekt?

What we learned

Brainwave data is complicated as hell, and it was really difficult to figure out. But once it finally worked, it felt great! Now we have a good starting point for similar experiments in the future.

What's next for Pokemon Hacks

Further emotional analysis, deep ensemble learning, and publication. Gotta get them citations.
