Create truly unique music, keyed to your biology.

What it does

Using an Emotiv EPOC EEG headset to read brainwaves, it maps specific audio transformations to specific regions of the brain. Activity in each region shifts the harmonics, frequency, and other parameters of music that is generated on the fly.
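A minimal sketch of what such a region-to-parameter mapping could look like. The channel names follow the EPOC's 10-20 electrode layout, but which channel drives which parameter, and the parameter ranges, are illustrative assumptions, not the project's actual values:

```python
def scale(value, lo, hi):
    """Linearly map a normalized 0..1 activity level into [lo, hi]."""
    return lo + value * (hi - lo)

def map_activity_to_params(activity):
    """activity: dict of EEG channel name -> normalized 0..1 level.
    Returns synthesis parameters for the music generator (hypothetical mapping)."""
    return {
        "freq":      scale(activity.get("O1", 0.0), 110.0, 880.0),  # occipital -> pitch
        "harmonics": scale(activity.get("F3", 0.0), 1.0, 16.0),     # frontal -> timbre
        "amp":       scale(activity.get("T7", 0.0), 0.1, 0.9),      # temporal -> loudness
    }

params = map_activity_to_params({"O1": 0.5, "F3": 0.25, "T7": 1.0})
```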

How I built it

Using a library called EmoKit to decrypt and read raw data from the EPOC's USB dongle, we mapped that data to each region of the brain and passed it through a client bridge that computes and sends OSC signals to our SuperCollider server. OSC (Open Sound Control) is a protocol similar to MIDI, and SuperCollider is a programming language for audio synthesis and manipulation -- effectively, we turned an EEG headset into a music controller and wrote music-generation algorithms to drive with it.
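The bridge's core job is packing values into OSC messages. A self-contained sketch of that encoding, following the OSC 1.0 wire format (the `/brain/o1` address is a hypothetical example, not the project's actual address scheme):

```python
import struct

def _pad(b: bytes) -> bytes:
    """Null-terminate and pad to a 4-byte boundary, per the OSC 1.0 spec."""
    b += b"\x00"
    return b + b"\x00" * (-len(b) % 4)

def osc_message(address: str, *args: float) -> bytes:
    """Encode a minimal OSC message carrying 32-bit float arguments."""
    msg = _pad(address.encode("ascii"))                    # address pattern
    msg += _pad(("," + "f" * len(args)).encode("ascii"))   # type tag string
    for a in args:
        msg += struct.pack(">f", a)                        # big-endian IEEE 754 float
    return msg

# e.g. send one channel's normalized activity level to the synth:
packet = osc_message("/brain/o1", 0.5)
# A UDP socket would then sock.sendto(packet, (host, 57120)),
# 57120 being sclang's default listening port.
```

In practice a library such as python-osc handles this, but the raw format shows what actually travels over the wire to SuperCollider.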

Challenges I ran into

Learning OSC and SuperCollider -- none of us had ever used either. SuperCollider was especially challenging, as it is a non-blocking, asynchronous programming language, so code does not simply execute top to bottom.

Accomplishments that I'm proud of

It works!

What I learned

A really neat network-based protocol for music-manipulation peripherals. OSC was excellent, and because it is network-based, we could stream live data from the device to all of our laptops simultaneously -- everyone could test against real data without having to schedule time on the machine connected to the headset.
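That fan-out can be sketched as a simple UDP sender that forwards each packet to every teammate's machine. The listener addresses here are placeholders, and the project may well have used a different mechanism (e.g. broadcast):

```python
import socket

# Hypothetical list of teammates' laptops; 57120 is sclang's default port.
LISTENERS = [("127.0.0.1", 57120)]  # e.g. [("10.0.0.12", 57120), ...]

def fan_out(packet: bytes, listeners=LISTENERS):
    """Send one datagram (e.g. an encoded OSC message) to every listener."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        for host, port in listeners:
            sock.sendto(packet, (host, port))
    finally:
        sock.close()
```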

What's next for biogen-music

In the future, we'd like to do some higher-level calibration and digital signal processing in order to analyze alpha/beta/gamma-band brain activity. This would allow the wearer not only to generate biometrically unique sounds, but also to control them consciously at a high level.

Built With

  • emotiv-epoc
  • osc
  • python
  • supercollider