Inspiration
Building interfaces that use real-time EEG data can be life-changing. However, the high inter-person variability of EEG signals makes it hard to train a single monolithic system that works well for every user. In our studies, we have been looking at real-time machine-learning algorithms that can specialize to individual users. We felt that our algorithm would be a promising candidate for building personalized interfaces, and our goal for the hackathon was to test this hypothesis with a wearable EEG sensor.
What it does
Our system learns to associate arbitrary actions, such as turning on a light or moving a cursor on the screen, with arbitrary EEG signals captured by a sensor. It learns in real time from a few minutes of data. Moreover, it learns from a scalar reward provided by the user, removing the need for extensive data labelling. We demonstrate the learning capabilities of the system by controlling an on-screen character with the EEG signals generated when we move our eyebrows or clench our jaw. The signal is captured by a Muse headband and transmitted to our program over Bluetooth.
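To give a feel for the reward-driven loop, here is a toy sketch: a thresholded linear unit over features of one EEG window, nudged online by the user's reward. This is an illustration only, with made-up feature values and a hypothetical `Associator` type; our actual system uses event-driven neural networks, not this rule.

```cpp
#include <array>
#include <iostream>

// Toy illustration only: a thresholded linear unit over features of one EEG
// window, adjusted online by a scalar reward. The feature values and the
// update rule are assumptions for the sketch, not our real algorithm.
constexpr int kNumFeatures = 4; // e.g. per-channel signal power over a short window (assumed)

struct Associator
{
    std::array<double, kNumFeatures> w {}; // learned weights
    double b = 0.0;                        // learned bias

    // true -> trigger the action (e.g. make the character jump)
    bool decide (const std::array<double, kNumFeatures> &x) const
    {
        double score = b;
        for (int i = 0; i < kNumFeatures; i++)
            score += w[i] * x[i];
        return score > 0.0;
    }

    // move the decision boundary in proportion to the user's reward
    void update (const std::array<double, kNumFeatures> &x, bool acted, double reward, double lr = 0.1)
    {
        double sign = acted ? 1.0 : -1.0;
        for (int i = 0; i < kNumFeatures; i++)
            w[i] += lr * reward * sign * x[i];
        b += lr * reward * sign;
    }
};

int main ()
{
    Associator model;
    // stand-ins for an "eyebrow raise" window and a quiet baseline window
    std::array<double, kNumFeatures> eyebrow = {1.0, 0.9, 0.8, 1.1};
    std::array<double, kNumFeatures> rest = {0.1, 0.05, 0.1, 0.05};

    for (int step = 0; step < 50; step++)
    {
        const auto &x = (step % 2 == 0) ? eyebrow : rest;
        bool acted = model.decide (x);
        // in the demo the reward comes from the user; here we simulate it:
        // -1 when the character does the wrong thing, 0 otherwise
        bool wanted = (step % 2 == 0);
        double reward = (acted == wanted) ? 0.0 : -1.0;
        model.update (x, acted, reward);
    }
    std::cout << "eyebrow window -> " << (model.decide (eyebrow) ? "act" : "stay") << std::endl;
    std::cout << "rest window    -> " << (model.decide (rest) ? "act" : "stay") << std::endl;
    return 0;
}
```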
How we built it
We implemented our algorithms in C++ and built the GUI using SDL and Dear ImGui. Additionally, we used BrainFlow to read data from the Muse headset in real time. Our algorithms use a new type of event-driven neural network that is highly computationally efficient, which allows us to train the system in real time in 2–4 minutes.
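For reference, the core of the data pipeline boils down to a handful of BrainFlow calls. Below is a minimal C++ sketch; the board ID and connection parameters are assumptions that depend on the Muse model and on how BrainFlow was built, so treat it as illustrative rather than our exact code.

```cpp
#include <chrono>
#include <iostream>
#include <thread>
#include <vector>

#include "board_shim.h"

int main ()
{
    BoardShim::enable_dev_board_logger ();

    // assumption: Muse 2 over native BLE; Muse S / Muse 2016 use different board IDs,
    // and a BLED112 dongle build would need params.serial_port instead
    BrainFlowInputParams params;
    int board_id = (int)BoardIds::MUSE_2_BOARD;

    BoardShim board (board_id, params);
    board.prepare_session ();
    board.start_stream ();

    // let a few seconds of samples accumulate, then drain the internal buffer
    std::this_thread::sleep_for (std::chrono::seconds (5));
    BrainFlowArray<double, 2> data = board.get_board_data ();

    std::vector<int> eeg_channels = BoardShim::get_eeg_channels (board_id);
    std::cout << "samples per channel: " << data.get_size (1) << std::endl;
    for (int channel : eeg_channels)
    {
        std::cout << "eeg channel " << channel << ", first sample: "
                  << data.at (channel, 0) << std::endl;
    }

    board.stop_stream ();
    board.release_session ();
    return 0;
}
```

In the actual system the windowed samples are fed into the learner instead of being printed.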
Challenges we ran into
The biggest challenge was setting up the data pipeline. While using BrainFlow from Python was straightforward, the C++ API was more challenging due to the lack of documentation. Another big challenge was deciding how best to showcase the utility of our algorithms. In theory, they can learn to associate any sufficiently strong and distinct signal with any action. Given the time constraints, we picked one of the simplest facial movements, moving the eyebrows, as the signal to control a character on screen.
Accomplishments that we're proud of
We were pleased to see that our algorithms, without any modifications, could learn in real time to associate actions with the noisy EEG signals of different people.
What we learned
Primarily, that using real-time EEG data is a viable direction for building useful systems, and that brain data is an excellent candidate for exploring the utility of online learning. At the same time, we learned the limitations of the Muse sensor. Going forward, we would like to try our algorithms with sensors that capture richer data.
What's next
The next challenge is training the system for longer and learning more sophisticated behaviour, such as precisely controlling a cursor on the screen. This could have huge implications for making computers more accessible to people with disabilities. However, a simple sensor such as the Muse is unlikely to reliably capture enough information to make that possible. Similarly, our learning algorithms, while significantly better than prior deep RL algorithms, still have room for improvement. The plan is to work on both fronts to build a more useful prototype.