Inspiration
I've always been fascinated by the idea of controlling machines with just our thoughts. When I saw this challenge on decoding motor imagery from EEG signals, I knew I had to try it.
The motivation became real when I learned that over 25% of adults in the U.S. experience some form of disability, with mobility impairments being the second most common. If we can reliably decode which limb someone intends to move, we move closer to enabling intuitive control of prosthetics, wheelchairs, or assistive devices.
That’s not science fiction anymore; it’s something I could actually work on this weekend.
What it does
NEXUS (Neural Extraction & Understanding System) classifies EEG brain signals into motor imagery categories.
Primary task: Predicts which limb a person is imagining moving (left hand, right hand, both hands, or both feet)
Advanced task: Additionally determines whether the movement was executed or imagined
Performance:
- 76.66% accuracy on the primary task
- 68.99% accuracy on the advanced task
This significantly outperforms the 25% random baseline.
How we built it
I started by exploring the raw EEG data through:
- Power spectral density plots
- Topographic maps
- ERD/ERS visualizations
These confirmed that the mu rhythm (8–12 Hz) contains meaningful motor imagery information. For example, left-hand imagery suppresses activity in the right motor cortex, and vice versa.
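The exploration above boils down to two quantities: band power in the mu range and its relative change between rest and imagery (ERD/ERS). A minimal sketch of both, using Welch's method from SciPy — the 250 Hz sampling rate is an illustrative assumption, not necessarily the challenge dataset's:

```python
import numpy as np
from scipy.signal import welch

def mu_band_power(signal, fs=250.0, band=(8.0, 12.0)):
    """Relative power of one EEG channel in the mu band (8-12 Hz)."""
    freqs, psd = welch(signal, fs=fs, nperseg=int(fs))
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return psd[mask].sum() / psd.sum()

def erd_percent(power_task, power_rest):
    """ERD/ERS as percent change relative to a rest baseline.

    Negative values = desynchronization (power drop during imagery),
    which is what left-hand imagery produces over the right motor cortex.
    """
    return 100.0 * (power_task - power_rest) / power_rest
```

A channel showing, say, mu power of 8 during imagery against a rest baseline of 10 gives `erd_percent(8.0, 10.0) == -20.0`, a 20% desynchronization.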
Key technique: Euclidean Alignment (EA)
Cross-subject variability is massive in EEG. Euclidean Alignment aligns each subject’s covariance matrices to a shared reference space, dramatically improving generalization. This alone boosted accuracy by 5–8%.
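A minimal NumPy sketch of EA as I applied it: whiten each subject's trials by the inverse square root of that subject's mean spatial covariance, so every subject's mean covariance lands on the identity. The `eps` floor stands in for the regularization mentioned later — the exact scheme I used differed, so treat it as illustrative:

```python
import numpy as np

def euclidean_alignment(trials, eps=1e-10):
    """Align one subject's trials to a shared reference space.

    trials: array of shape (n_trials, n_channels, n_samples).
    Whitens by the inverse square root of the subject's mean spatial
    covariance; afterwards the mean covariance is (near) identity,
    so trials from different subjects live on a comparable scale.
    """
    covs = np.array([X @ X.T / X.shape[1] for X in trials])
    R = covs.mean(axis=0)
    # Inverse matrix square root via eigendecomposition, with a small
    # eigenvalue floor to avoid numerical blow-ups on rank-deficient data
    vals, vecs = np.linalg.eigh(R)
    vals = np.maximum(vals, eps)
    R_inv_sqrt = vecs @ np.diag(1.0 / np.sqrt(vals)) @ vecs.T
    return np.array([R_inv_sqrt @ X for X in trials])
```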
Models
I built a 9-model ensemble, combining classic, deep learning, and graph-based approaches:
Classic BCI
- CSP + SVM (highest ensemble weight — surprisingly effective)
CNN-based
- EEGNet
- EEGNet-Attention
- ShallowConvNet
Transformer-based
- CTNet
- ATCNet
- EEGConformer
Graph / Multi-scale
- STGNN
- MixNet
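Since CSP + SVM ended up carrying the highest ensemble weight, here is a sketch of that classic pipeline: learn spatial filters that maximize the variance ratio between two classes, then feed log-variance features to a linear classifier. This is a from-scratch two-class version for illustration (my actual setup handled four classes and used an SVM on top):

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_pairs=3):
    """Common Spatial Patterns for two classes.

    trials_*: arrays of shape (n_trials, n_channels, n_samples).
    Returns W of shape (2*n_pairs, n_channels): the filters at both
    ends of the eigenvalue spectrum discriminate best.
    """
    def mean_cov(trials):
        covs = [X @ X.T / np.trace(X @ X.T) for X in trials]
        return np.mean(covs, axis=0)

    Ca, Cb = mean_cov(trials_a), mean_cov(trials_b)
    # Generalized eigenproblem: Ca w = lam (Ca + Cb) w, ascending lam
    vals, vecs = eigh(Ca, Ca + Cb)
    idx = np.concatenate([np.arange(n_pairs), np.arange(-n_pairs, 0)])
    return vecs[:, idx].T

def csp_features(trials, W):
    """Log-variance of CSP-filtered trials: the classic SVM input."""
    return np.array([np.log(np.var(W @ X, axis=1)) for X in trials])
```

Despite its age, this captures exactly the contralateral variance differences the ERD/ERS plots showed, which is likely why it held its own against the deep models.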
I used Dirichlet random search to find optimal ensemble weights.
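The weight search itself is simple: sample candidate weight vectors from a Dirichlet distribution (so they always sum to 1), blend the models' validation probabilities, and keep the best-scoring vector. A sketch under assumed shapes — the iteration count and concentration `alpha` here are placeholders, not my tuned values:

```python
import numpy as np

def dirichlet_weight_search(val_probs, y_val, n_iter=2000, alpha=1.0, seed=0):
    """Random search over the probability simplex for ensemble weights.

    val_probs: (n_models, n_trials, n_classes) validation probabilities.
    y_val: (n_trials,) integer labels.
    Returns the best weight vector and its validation accuracy.
    """
    rng = np.random.default_rng(seed)
    n_models = val_probs.shape[0]
    best_w, best_acc = np.full(n_models, 1.0 / n_models), -1.0
    for _ in range(n_iter):
        w = rng.dirichlet(np.full(n_models, alpha))
        blended = np.tensordot(w, val_probs, axes=1)  # (n_trials, n_classes)
        acc = (blended.argmax(axis=1) == y_val).mean()
        if acc > best_acc:
            best_w, best_acc = w, acc
    return best_w, best_acc
```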
For the advanced task, all models were modified to use dual output heads:
- One for limb classification
- One for execution vs imagination
These were trained jointly in a multi-task learning setup.
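Structurally, the dual-head change looks like this. The tiny convolutional encoder below is just an illustrative stand-in for the real backbones (EEGNet, CTNet, etc.), and the loss weighting `lam` is a hypothetical parameter, but the two-head layout and joint cross-entropy loss match the setup described:

```python
import torch
import torch.nn as nn

class DualHeadEEG(nn.Module):
    """Shared encoder with two task-specific output heads."""

    def __init__(self, n_channels=22, n_samples=256):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(n_channels, 16, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
            nn.Flatten(),
        )
        self.limb_head = nn.Linear(16, 4)  # left/right/both hands/both feet
        self.mode_head = nn.Linear(16, 2)  # executed vs imagined

    def forward(self, x):                  # x: (batch, channels, samples)
        z = self.encoder(x)
        return self.limb_head(z), self.mode_head(z)

def multitask_loss(limb_logits, mode_logits, limb_y, mode_y, lam=0.5):
    """Joint loss: both heads backpropagate through the shared encoder."""
    ce = nn.functional.cross_entropy
    return ce(limb_logits, limb_y) + lam * ce(mode_logits, mode_y)
```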
Challenges we ran into
Cross-subject generalization was brutal.
Models that performed well on training subjects often failed on unseen individuals. EA helped, but tuning its regularization was tricky: too little caused numerical instability, too much erased useful signal.
NaNs in attention mechanisms.
Self-attention softmax layers can explode with extreme values. I fixed this by clamping activations and using torch.finfo(torch.float32).min for masking instead of -inf.
Training time constraints.
Each model took 30–60 minutes to train, forcing careful prioritization during experimentation.
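The NaN fix above can be sketched as a numerically safer masked softmax (the clamp bound of 30 is an illustrative choice, not my exact value):

```python
import torch

def masked_softmax(scores, mask, clamp=30.0):
    """Attention softmax that avoids NaNs.

    Clamping bounds the logits, and masking with the most negative
    *finite* float instead of -inf means a fully masked row degrades
    to a uniform distribution rather than NaN (since -inf - (-inf)
    inside softmax's max-subtraction produces NaN).
    """
    scores = scores.clamp(-clamp, clamp)
    neg = torch.finfo(scores.dtype).min
    scores = scores.masked_fill(~mask, neg)
    return torch.softmax(scores, dim=-1)
```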
Accomplishments that we're proud of
- 76.66% accuracy on the primary task (~3× better than chance)
- 68.99% accuracy on the advanced dual-task problem
- CSP + SVM (from 2008!) earning the highest ensemble weight (19%)
- A complete, end-to-end pipeline from raw EEG to submission files
- Visualizations that genuinely helped interpret what the models learned
What we learned
Domain knowledge matters.
Understanding mu rhythms, ERD/ERS, and motor cortex localization directly improved model design and validation.
Ensembles beat single models.
No individual architecture outperformed the ensemble.
EEG transfer learning is hard.
Unlike images, EEG varies wildly between people; explicit alignment is essential.
Old methods still work.
CSP captures spatial patterns that even modern deep networks can miss.
What's next for NEXUS
- Subject-adaptive fine-tuning using a small number of labeled trials
- Online BCI integration for real-time decoding
- Riemannian geometry methods operating directly on covariance manifolds
- Contrastive pre-training for subject-invariant representations
- Multimodal fusion with EMG or eye tracking
This was my first serious EEG project, and I learned more in one weekend than I expected.
The brain is messy, noisy, and different for everyone, and that’s exactly what makes it interesting.

