Inspiration

Mobility impairments rank as the second most prevalent disability in the U.S., affecting millions of lives. We were driven by the potential of Brain-Computer Interfaces (BCIs) to restore agency to those who have lost it—turning thought into action.

However, current BCIs are often "black boxes" that struggle with the noisy, chaotic nature of electrical brain signals. We didn't just want to build a classifier; we wanted to build a bridge. Inspired by the brain's biological structure, we realized that treating electrodes as isolated sensors was wrong: the brain works through connectivity. This led us to explore Graph Attention Networks (GATs), a 1D Residual Network, and a specialized CNN (EEGNet) to capture not just what the brain is doing, but how different regions communicate during fine motor tasks.

What it does

NeuroBeat translates raw EEG signals into precise motor commands, specifically classifying fine motor movements into four distinct categories: Left Hand, Right Hand, Both Hands, and Both Feet. Unlike standard classifiers that look at a single snapshot, our system analyzes the temporal dynamics and spatial connectivity of 64 distinct brain channels, filters out physiological noise (muscle artifacts, eye blinks), and predicts the user's intent with high fidelity using a custom Graph Attention Network.

To demonstrate the impact this technology can have on gaming and therapy, we use the Google Gemini API to translate these recognized limb movements into inputs for a rhythm game.

How we built it

Our development process was an evolution of three distinct architectures, each testing a different hypothesis about the brain:

The "Time-Series" Approach (1D ResNet): We first treated EEG signals like audio waveforms. We built a deep 1D Residual Network (4 stages, 8 residual blocks) to learn complex temporal filters, capturing the rhythm of movement over time. This model proved too computationally expensive.
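As an illustration, here is a minimal sketch of one residual block of the kind our network stacks (kernel size, shapes, and the class name are illustrative, not our exact configuration):

```python
import torch
import torch.nn as nn

class ResidualBlock1D(nn.Module):
    """One residual block over the time axis of an EEG signal.

    Input shape: (batch, channels, time). A hypothetical sketch of the
    kind of block a 4-stage / 8-block 1D ResNet stacks.
    """
    def __init__(self, channels: int, kernel_size: int = 7):
        super().__init__()
        pad = kernel_size // 2
        self.conv1 = nn.Conv1d(channels, channels, kernel_size, padding=pad)
        self.bn1 = nn.BatchNorm1d(channels)
        self.conv2 = nn.Conv1d(channels, channels, kernel_size, padding=pad)
        self.bn2 = nn.BatchNorm1d(channels)
        self.act = nn.ReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.act(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.act(out + x)  # skip connection preserves the raw signal path

# 64 EEG channels, 160 time samples per trial (illustrative)
x = torch.randn(2, 64, 160)
block = ResidualBlock1D(64)
y = block(x)
```

The skip connection is what makes a deep temporal stack trainable, but every extra stage multiplies the convolution cost over the full time axis, which is where the expense comes from.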

The "Connectivity" Approach (MFF-GAT): We treated the 64 electrodes as nodes in a graph. We built a Multi-scale Feature Fusion (MFF) module to extract temporal features, then fed them into a Graph Attention Network (GAT). This allowed the model to dynamically learn which parts of the brain were "talking" to each other (correlation) during specific movements. This model was too slow for real-time use.
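Our actual pipeline uses PyTorch Geometric; purely to illustrate the attention mechanism itself, here is a minimal single-head NumPy sketch (the 8-node fully connected toy graph, feature sizes, and function name are illustrative; the real graph has 64 electrode nodes):

```python
import numpy as np

def gat_attention(H, A, W, a, slope=0.2):
    """Single-head graph-attention layer, NumPy sketch.

    H: (N, F) node features, one row per electrode
    A: (N, N) binary adjacency (1 where two electrodes are connected)
    W: (F, F2) shared linear projection
    a: (2 * F2,) attention vector
    """
    Wh = H @ W                                     # project node features
    N = Wh.shape[0]
    e = np.zeros((N, N))
    for i in range(N):                             # e[i, j] = a . [Wh_i || Wh_j]
        for j in range(N):
            e[i, j] = np.concatenate([Wh[i], Wh[j]]) @ a
    e = np.where(e > 0, e, slope * e)              # LeakyReLU
    e = np.where(A > 0, e, -1e9)                   # mask non-neighbours
    att = np.exp(e - e.max(axis=1, keepdims=True))
    att = att / att.sum(axis=1, keepdims=True)     # softmax over neighbours
    return att @ Wh, att                           # new features, edge weights

rng = np.random.default_rng(0)
N, F = 8, 4                                        # toy sizes
H = rng.normal(size=(N, F))
A = np.ones((N, N))                                # fully connected toy graph
W = rng.normal(size=(F, F))
a = rng.normal(size=2 * F)
H_new, att = gat_attention(H, A, W, a)
```

The `att` matrix is exactly the "who is talking to whom" signal: each row is a learned, normalized weighting over a node's neighbours, re-computed per input rather than fixed by electrode geometry.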

The "Noise-Cancellation" Approach (EEGNet + Baseline): We implemented a specialized CNN (EEGNet) with a crucial addition: Subject-Specific Baseline Normalization, which let this lighter model capture much of the benefit of the first two approaches.
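The baseline idea can be sketched in a few lines of NumPy (function name, shapes, and segment lengths are illustrative):

```python
import numpy as np

def baseline_normalize(trial, rest, eps=1e-8):
    """Normalize a trial against the subject's own resting-state statistics.

    trial: (channels, time) EEG segment recorded during the task
    rest:  (channels, time) resting-state segment from the *same* subject
    Returns the trial expressed as a per-channel deviation from that
    subject's own baseline, so the model sees relative change, not
    absolute voltage.
    """
    mu = rest.mean(axis=1, keepdims=True)       # per-channel resting mean
    sigma = rest.std(axis=1, keepdims=True)     # per-channel resting spread
    return (trial - mu) / (sigma + eps)

rng = np.random.default_rng(1)
rest = rng.normal(loc=5.0, scale=2.0, size=(64, 320))   # synthetic baseline
trial = rng.normal(loc=5.0, scale=2.0, size=(64, 160))  # synthetic trial
z = baseline_normalize(trial, rest)
```

Because the statistics come from the same subject's rest recording, inter-subject differences in absolute voltage are factored out before the CNN ever sees the data.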

Finally, we wrapped these models in a pipeline that utilized Gemini to analyze our confusion matrices and activation maps, providing a natural language summary of the model's performance.

Challenges we ran into

Our biggest hurdle was the "Flat Data" Trap. We started with a standard Random Forest baseline that treated the EEG data as flat vectors. It hit a hard ceiling at 32% accuracy because it ignored the spatial and temporal structure of the signal. Realizing that "flat" ML wasn't enough was our pivot point to Deep Learning.
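For reference, the "flat" baseline looks like this (synthetic data and illustrative shapes; our real accuracy figures came from the actual EEG dataset, not this toy):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
# 400 synthetic trials: 64 channels x 160 time samples, 4 movement classes
X = rng.normal(size=(400, 64, 160))
y = rng.integers(0, 4, size=400)

# The "Flat Data" trap: reshaping erases channel/time structure entirely
X_flat = X.reshape(len(X), -1)          # (400, 10240) feature vectors
X_tr, X_te, y_tr, y_te = train_test_split(X_flat, y, random_state=0)
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X_tr, y_tr)
score = clf.score(X_te, y_te)
```

Once the spatial layout of the 64 electrodes and the ordering of time samples are flattened away, the forest has nothing but unordered voltages to split on, which is why this family of models plateaus.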

Another major challenge was noise. One person's "resting" brain activity often looked like another person's "moving" activity. This caused our early models' performance to fluctuate. We solved this by implementing the dynamic baseline correction pipeline, which forced the model to learn the relative change in voltage rather than absolute values.

Accomplishments that we're proud of

The Pivot to Graphs: Successfully implementing a complex MFF-GAT (Graph Attention) pipeline from scratch in PyTorch Geometric, moving beyond simple image-based CNNs.

Smart Normalization: Developing the logic to normalize patients against their own resting-state baselines, which significantly stabilized training.

Visual Reasoning: Integrating Gemini to explain our results. Seeing the model highlight the correct sensorimotor cortex regions during a "Right Hand" classification confirmed that our AI was learning real neurophysiology, not just overfitting noise.

The "Game" Controller: Taking the theoretical output of our model and conceptually mapping it to game inputs in place of keystrokes, proving the viability of this tech for real-time prosthetic control.
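Conceptually, that mapping is just a small lookup from the four predicted classes to game actions (the class indices and action names below are hypothetical, not our exact bindings):

```python
# Hypothetical mapping from the model's 4 output classes to rhythm-game inputs.
CLASS_TO_KEY = {
    0: "left",    # Left Hand
    1: "right",   # Right Hand
    2: "up",      # Both Hands
    3: "down",    # Both Feet
}

def to_game_input(predicted_class: int) -> str:
    """Translate a classifier prediction into a game action string."""
    return CLASS_TO_KEY[predicted_class]
```

The point is that once the classifier is reliable, swapping a keyboard handler for this lookup is the easy part; the same mapping could drive a prosthetic command set instead of a game.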

What we learned

We learned that preprocessing is king in neurotech. No amount of deep learning can fix dirty signal data; implementing robust bandpass filtering (0.5–50 Hz) and artifact removal was just as important as the model architecture.
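A zero-phase Butterworth bandpass in that range is a standard way to do this; a minimal SciPy sketch (the 160 Hz sampling rate and filter order are illustrative assumptions, not our exact settings):

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(eeg, fs=160.0, low=0.5, high=50.0, order=4):
    """Zero-phase 0.5-50 Hz Butterworth bandpass.

    eeg: (channels, time) array. fs is the sampling rate in Hz;
    filtfilt runs the filter forward and backward so the EEG phase
    (and hence event timing) is not shifted.
    """
    b, a = butter(order, [low, high], btype="band", fs=fs)
    return filtfilt(b, a, eeg, axis=-1)

rng = np.random.default_rng(2)
raw = rng.normal(size=(64, 640))     # 4 s of synthetic 64-channel data at 160 Hz
clean = bandpass(raw)
```

The low cutoff strips slow drift and electrode sway; the high cutoff removes line noise and most muscle-artifact energy before the model sees anything.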

We also learned the difference between structural and functional connectivity. Just because two electrodes are physically close doesn't mean they are working together. Our GAT model taught us that the brain's "wiring" changes dynamically depending on whether you are moving your hand or your foot.
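A simple way to see functional (as opposed to structural) connectivity is a channel-by-channel correlation matrix, which is the static cousin of the dynamic edge weights our GAT learns (function name and sizes are illustrative):

```python
import numpy as np

def functional_connectivity(eeg):
    """Pearson correlation between every pair of EEG channels.

    eeg: (channels, time) array. Returns a (channels, channels) matrix:
    entry [i, j] is high when channels i and j co-vary over the window,
    regardless of how physically close the electrodes are.
    """
    return np.corrcoef(eeg)

rng = np.random.default_rng(3)
eeg = rng.normal(size=(64, 320))     # synthetic 64-channel window
C = functional_connectivity(eeg)
```

Recomputing this matrix per movement window is what makes the "wiring" appear to change between a hand task and a foot task, even though the electrode layout never moves.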

What's next for NeuroBeat

We aim to close the loop completely. Currently, our processing is offline; the next step is optimizing our GAT for real-time streaming on edge devices. We also want to explore the "Imagined" Movement track more deeply. Since the neural signatures of imagining a movement are similar to performing it, adapting NeuroBeat to work solely on imagination would open up this technology to paralyzed patients who cannot physically move their limbs at all.

Built With

  • eegnet
  • python
  • randomforest
  • resnet1d
  • sklearn
  • torch