Inspiration

Roughly 1 in 4 adults in the U.S. lives with a disability. For those with mobility impairments in particular, existing brain-computer interfaces (BCIs) often remain confined to labs because they are either too invasive or too unreliable for real-world use. We wanted to build a non-invasive system that bridges the gap between neural intent and assistive robotics, moving us closer to a future where high-resolution motor control is accessible to everyone.

What it does

Our project uses 64-channel EEG recordings from 86 participants to decode motor intent. By processing full 4.1-second windows sampled at 160 Hz, the system identifies subtle neural dynamics. This provides a foundation for next-generation assistive technologies, such as robotics and exoskeletons, that can be controlled through universal neural patterns.
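
For concreteness, the dimensions above work out to a 64 x 656 array per window (the array layout and variable names here are illustrative assumptions, not our exact data format):

```python
import numpy as np

FS = 160         # sampling rate in Hz, as described above
WINDOW_S = 4.1   # window length in seconds
N_CHANNELS = 64  # EEG channels

n_samples = int(round(FS * WINDOW_S))  # 656 samples per window
epoch = np.zeros((N_CHANNELS, n_samples))  # one trial: channels x time
print(epoch.shape)  # (64, 656)
```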

How we built it

We developed a pipeline that progressed from classical signal processing to a specialized deep learning architecture:

  • Preprocessing: We suppressed muscle contamination and other physiological artifacts with a bandpass filter targeting the frequency bands associated with motor planning and execution.

  • Feature Engineering: We ran ERD/ERS analysis to compute relative power changes over the sensorimotor cortex, applied Common Spatial Patterns (CSP) for clearer class separation, and used Recursive Feature Elimination to keep only the most predictive features.

  • Modeling: We first benchmarked classical ML models, with XGBoost providing the best baseline at 37.11% accuracy.

  • EEGNet: To capture complex spatial-temporal patterns, we implemented EEGNet, a compact CNN designed for EEG. It uses a three-stage process: temporal convolutions that act as learned frequency filters, depthwise convolutions that act as spatial filters, and separable convolutions that summarize patterns while keeping the parameter count low enough to prevent overfitting.
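
The preprocessing step above can be sketched as a zero-phase Butterworth bandpass. The 8-30 Hz passband here is an assumed choice covering the mu and beta rhythms typically used in motor decoding, not necessarily the exact band we used:

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass(data, fs=160.0, lo=8.0, hi=30.0, order=4):
    """Zero-phase Butterworth bandpass applied along the time axis.

    data: array of shape (n_channels, n_samples)
    """
    b, a = butter(order, [lo, hi], btype="bandpass", fs=fs)
    return filtfilt(b, a, data, axis=-1)

# Demo: a 15 Hz oscillation survives; a large 2 Hz drift is removed.
t = np.arange(0, 4.1, 1 / 160.0)
raw = np.sin(2 * np.pi * 15 * t) + 2.0 * np.sin(2 * np.pi * 2 * t)
clean = bandpass(raw[np.newaxis, :])
```

`filtfilt` runs the filter forward and backward, so the output has no phase lag relative to the underlying neural rhythm.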
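
ERD/ERS boils down to band power during movement relative to a rest baseline; negative values indicate desynchronization. A minimal version (the window boundaries and function names are illustrative assumptions):

```python
import numpy as np

def erd_ers(epoch, fs=160, baseline=(0.0, 1.0), task=(2.0, 4.0)):
    """Relative band-power change in percent: negative = ERD, positive = ERS.

    epoch: band-filtered signal, shape (n_channels, n_samples).
    """
    def power(seg):
        s0, s1 = int(seg[0] * fs), int(seg[1] * fs)
        return np.mean(epoch[:, s0:s1] ** 2, axis=1)

    p_base, p_task = power(baseline), power(task)
    return 100.0 * (p_task - p_base) / p_base

# Demo: a 12 Hz rhythm whose amplitude halves at movement onset
fs = 160
t = np.arange(0, 4.1, 1 / fs)
amp = np.where(t < 2.0, 1.0, 0.5)
sig = (amp * np.sin(2 * np.pi * 12 * t))[np.newaxis, :]
print(erd_ers(sig, fs))  # ~ -75%: power drops to a quarter of baseline
```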
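
CSP finds spatial filters that maximize variance for one class while minimizing it for the other, via a generalized eigenvalue problem on the class covariance matrices. A compact two-class sketch (our multiclass setting would need a one-vs-rest extension; names are hypothetical):

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(X1, X2):
    """Common Spatial Patterns for two classes.

    X1, X2: trials of shape (n_trials, n_channels, n_samples).
    Returns W (n_channels, n_channels); the first columns maximize
    class-1 variance, the last maximize class-2 variance.
    """
    def mean_cov(X):
        return np.mean([np.cov(trial) for trial in X], axis=0)

    C1, C2 = mean_cov(X1), mean_cov(X2)
    # Solve C1 w = lambda (C1 + C2) w; eigenvalues near 1 favor class 1,
    # eigenvalues near 0 favor class 2.
    vals, vecs = eigh(C1, C1 + C2)
    return vecs[:, np.argsort(vals)[::-1]]
```

Log-variance of the projected signals along the extreme filters is the classic CSP feature vector fed to a downstream classifier.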
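
The reason EEGNet's separable convolutions resist overfitting is parameter count: a depthwise convolution (one filter per channel) followed by a pointwise 1x1 mix uses far fewer weights than a full convolution. A back-of-the-envelope comparison (the channel counts and kernel length are illustrative, not EEGNet's exact hyperparameters):

```python
def conv_params(c_in, c_out, k):
    """Weights in a standard 1-D conv layer (biases ignored)."""
    return c_in * c_out * k

def separable_params(c_in, c_out, k):
    """Depthwise (one k-tap filter per input channel) + pointwise 1x1 mix."""
    return c_in * k + c_in * c_out

c_in, c_out, k = 16, 16, 16
full = conv_params(c_in, c_out, k)      # 4096 weights
sep = separable_params(c_in, c_out, k)  # 256 + 256 = 512 weights
print(full, sep, full / sep)            # 8x fewer parameters
```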

Challenges we ran into

The EEG data was noisy, with muscle contamination and physiological artifacts, so extensive filtering was required. And even with heavy feature engineering, our classical ML models struggled, likely because they could not capture the complex, non-linear patterns in the signal.

What we learned

Through our testing, we found that treating EEG data as a spatial-temporal image is far more effective than shallow modeling. Our EEGNet model achieved a test accuracy of 62.19%, a substantial improvement over our 37.11% classical baseline. This showed that end-to-end deep learning can capture the nuanced neural signatures required for multiclass limb-movement classification.
