Inspiration
Keyboards, among other inputs, have made computer interfaces convenient for decades, but can we make these interfaces even more convenient? More importantly, can we make them more accessible? Certain impairments and conditions can interfere with some users' ability to interact with keyboards, toggles, and other computer inputs. While medical solutions vary, growing research into Brain-Computer Interfaces (BCIs) seems to offer a unique answer. In this project, we aim to contribute to this emerging technology and broaden computer accessibility with our product, SignalBreaker.
SignalBreaker is a software product that uses readily available EEG equipment to create a simple but effective BCI, granting users the ability to navigate interfaces using only their brainwaves. By leveraging modern AI technology, SignalBreaker maps specific user mental states to real-time inputs.
What it does
Our product takes in EEG (electroencephalogram) data from a commercially available headset, the Muse 2, and learns to map specific user states to real-time directional input. This allows users to interact with an interface using practiced brain activity. Such translations previously required long periods of data collection and were constrained by limited machine learning capabilities; our product uses newer techniques to streamline training and improve model efficiency, all on an affordable EEG device.
How we built it
Given our limited scope, we homed in on developing a model that predicts a small set of states corresponding to directional inputs (up, down, left, and right). The pipeline feeds real-time Muse data into a cached model hosted in a local service, which returns real-time predictions.
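The pipeline above can be sketched roughly as follows. This is a minimal illustration, not the project's actual code: the window length, the function names (`predict_direction`, `run_pipeline`), and the stand-in classifier are all our own assumptions, and the simulated stream takes the place of live Muse 2 samples.

```python
import numpy as np
from collections import deque

SAMPLE_RATE = 256        # Muse 2 EEG sample rate (Hz)
N_CHANNELS = 4           # the Muse 2's EEG channels: TP9, AF7, AF8, TP10
WINDOW_SAMPLES = SAMPLE_RATE  # classify one 1-second window at a time (assumed)

DIRECTIONS = ["up", "down", "left", "right"]

def predict_direction(window: np.ndarray) -> str:
    """Stand-in for the cached model: maps one window of EEG samples to a
    direction. Here we just index by the highest-variance channel; the real
    service would run a trained classifier instead."""
    assert window.shape == (WINDOW_SAMPLES, N_CHANNELS)
    return DIRECTIONS[int(np.argmax(window.var(axis=0))) % len(DIRECTIONS)]

def run_pipeline(sample_stream) -> list[str]:
    """Buffer streaming samples into fixed-length windows and emit one
    prediction per full window."""
    buffer: deque = deque(maxlen=WINDOW_SAMPLES)
    predictions = []
    for sample in sample_stream:
        buffer.append(sample)
        if len(buffer) == WINDOW_SAMPLES:
            predictions.append(predict_direction(np.asarray(buffer)))
            buffer.clear()
    return predictions

# Simulate 3 seconds of 4-channel EEG in place of the live headset.
rng = np.random.default_rng(0)
fake_stream = rng.normal(size=(3 * SAMPLE_RATE, N_CHANNELS))
preds = run_pipeline(fake_stream)
print(preds)  # three predictions, one per 1-second window
```

In the real system, the sample stream would come from the headset (e.g., over Lab Streaming Layer) and `predict_direction` would call the locally cached model.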
Challenges we ran into
The biggest challenge of this project was working around the limited capabilities of our EEG hardware. Clinical EEGs typically record brain activity from 19 or more electrodes, while our headband uses only 5, and mapping becomes difficult when key brain regions go unrecorded. Because most of the headband's sensing happens on the forehead, closest to the prefrontal cortex, we chose candidate brain states that engage this region to achieve the best data collection and signal differentiation. While this certainly constrains the data we can capture, it is still enough to develop a functioning BCI product.
Further complications arose from data collection. While research shows that Muse 2 data can be used to distinguish certain directional thought patterns, such studies often require large amounts of data and still limit the scope of their outputs. Since time was short, we collected as much training data as possible from a single user to maximize our classifier's accuracy in distinguishing brain states.
Accomplishments that we're proud of
We created a working model trained on data we collected ourselves using a Muse 2 EEG headband (acquired from the SCU Department of Psychology).
What we learned
Several of our members learned new skills, such as preprocessing EEG data to reduce noise, building a custom GUI, and working with the Muse 2 headset to collect data.
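As one example of the kind of noise reduction involved, a common first step is band-pass filtering raw EEG to drop slow drift and mains interference. The sketch below is illustrative, not our exact preprocessing code; the 1-40 Hz band and filter order are assumptions.

```python
import numpy as np
from scipy.signal import butter, filtfilt

SAMPLE_RATE = 256  # Muse 2 EEG sample rate (Hz)

def bandpass(data: np.ndarray, low: float = 1.0, high: float = 40.0,
             order: int = 4) -> np.ndarray:
    """Zero-phase Butterworth band-pass along the time axis (axis 0).
    Keeps roughly 1-40 Hz, attenuating DC drift and 60 Hz mains noise."""
    nyq = SAMPLE_RATE / 2.0
    b, a = butter(order, [low / nyq, high / nyq], btype="band")
    return filtfilt(b, a, data, axis=0)

# Toy signal: a 10 Hz "alpha-like" component plus 60 Hz mains noise.
t = np.arange(2 * SAMPLE_RATE) / SAMPLE_RATE
raw = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 60 * t)
clean = bandpass(raw[:, None])
# After filtering, the 60 Hz component is strongly attenuated while
# the 10 Hz component passes through nearly unchanged.
```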
What's next for SignalBreaker
We hope to explore phone integration, or even pair with AR and VR technology, in the future.
