Inspiration
Proof of concept to convert brainwaves to music. Neuroscientists are always looking for interesting ways to explore data for research and education. By converting electroencephalography (EEG) signals recorded from the brain into music, we can look for patterns in the signals the same way music-analysis algorithms look for patterns in audio. Sonifying patterns of neural activity can also foster interest in neural processes among people who find neuroscience far more engaging when music is involved!
What it does
Currently this is a proof of concept. The data, obtained from dataset 24 at http://www.bnci-horizon-2020.eu/database/data-sets, contain EEG recordings from a driving simulation. These recordings were cleaned, preprocessed, and then converted into music.
How we built it
We isolated the relevant EEG recordings from the dataset, applied a bandpass filter, and computed the fast Fourier transform (FFT) over a set of windows slid across each electrode's signal. We then isolated the strongest contributing frequency in each bin/window, classified each frequency as one of 61 piano keys, and concatenated the resulting notes to produce an audio file.
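The filter-then-windowed-FFT step above can be sketched roughly as below. The window length and the 1–40 Hz passband are illustrative assumptions, not the values used in the actual pipeline:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def dominant_frequencies(signal, fs, window_sec=0.5, low=1.0, high=40.0):
    """Bandpass-filter one electrode's signal, then take the FFT of
    successive windows and keep the strongest frequency in each."""
    # 4th-order Butterworth bandpass (hypothetical cutoffs)
    b, a = butter(4, [low, high], btype="band", fs=fs)
    filtered = filtfilt(b, a, signal)

    win = int(window_sec * fs)
    peaks = []
    for start in range(0, len(filtered) - win + 1, win):
        segment = filtered[start:start + win]
        spectrum = np.abs(np.fft.rfft(segment))
        freqs = np.fft.rfftfreq(win, d=1.0 / fs)
        spectrum[0] = 0.0  # ignore the DC component
        peaks.append(freqs[np.argmax(spectrum)])
    return peaks
```

Each returned peak frequency would then be snapped to the nearest piano key and rendered as a note of fixed duration.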
Challenges we ran into
These data were created and structured for research purposes with standard EEG processing tools, which made isolating the relevant signals difficult. Filtering the signals and decomposing them into meaningful patterns that could become sound was far more laborious than expected: there is no standard for converting EEG signals to musical notes, since the standard methods target clinical and research uses, such as inferring event-related signals in human behavior.
Accomplishments that we're proud of
Although we may not have created the first computerized musical genius, we are proud of producing a pipeline that can convert these signals from start to finish and produce some type of music. Being able to power through the mental blocks in the middle of the night after 30 hours of coding is also something to be proud of, despite the emotional rollercoasters.
What we learned
We learned about bandpass filtering, Fourier transforms, and some basics of music theory. We also learned that EEG signals can be processed in many different ways, each leading to very different analysis results, which will inform our future endeavors with EEG signal processing. And we built our very first GUI with tkinter!
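One piece of music theory we picked up is twelve-tone equal temperament: adjacent keys differ by a factor of 2^(1/12). A minimal sketch of snapping a frequency to a key, assuming a 61-key C2–C7 layout where concert A (440 Hz) is key 34 (the project's actual numbering may differ):

```python
import math

def freq_to_key(freq, keys=61, a4_key=34, a4_freq=440.0):
    """Map a frequency to the nearest key on an assumed 61-key keyboard.
    Equal temperament: each key up multiplies frequency by 2**(1/12)."""
    key = round(a4_key + 12 * math.log2(freq / a4_freq))
    return min(max(key, 1), keys)  # clamp to the keyboard's range
```

Frequencies outside the keyboard's range are clamped to the lowest or highest key rather than dropped, so every window still produces a note.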
What's next for Neural Musician
We would like to work on combining the signals from all of the electrodes to produce more meaningful music. Detecting particular cognitive states is a common research task; by building a classifier for cognitive state, such as mood, we may be able to detect a subject's current mood and generate music to match it. We would also like to package this kind of work in a better app - maybe with something like Firebase!