Our Inspiration

Our team came together on Friday night with two things in common: a strong drive to build something that could potentially transform the future for the better, and a shared passion for the intersection of technology and neuroscience. We’ve all seen sci-fi movies like Ready Player One where the future is lived purely through the internet using VR or AR headsets, and they often portray a dim view of what’s in store for us. We believed that a better future was possible–that headsets and data collection may be inevitable, but what we do with that data is up to us, the next generation, to decide. Our fate rests in our hands.

It’s hard to pin down any one source of inspiration, but the one that made the biggest impact is The Diving Bell and the Butterfly by journalist Jean-Dominique Bauby, a memoir about his life before and after a massive stroke left him with locked-in syndrome. Locked-in syndrome is a condition in which almost all of a patient’s voluntary muscles are paralyzed except for a few that control blinking. The book is a deep foray into the despair, longing, and fleeting joy that come with being isolated inside an unresponsive body: his mind comprehends everything around him, but he cannot interact with it. He wrote the entire book solely by blinking his left eyelid, taking over 200,000 blinks and more than one thousand hours of painstaking “dictation” to an assistant.

Bauby’s story is just one among many people living with locked-in syndrome, and he may have been luckier than most in his ability, albeit limited, to communicate. Regardless, many people with locked-in syndrome suffer from a damning feeling of isolation, which Bauby captured so beautifully. Imagine if we could open a direct channel of communication for those with the syndrome.

We believe that we can. EEGs, or electroencephalograms, detect brain activity through electrodes placed against a person’s scalp, and are commonly used to diagnose and help treat patients with epilepsy and other disorders. But there’s so much more that human-computer interaction makes possible: what if people with locked-in syndrome could communicate their feelings directly, in real time?

Our Project

Given the time constraints of HackHarvard, we implemented a demo EEG interpretation model that could be scaled and improved upon in the future; even within those constraints, scalability was a priority. EEG data typically looks like several waves reflecting the activity of large groups of neurons in your brain, and these waves can be used in medicine to understand events such as seizures in epilepsy. One easily detectable signal is the muscle movement of your eyes when you blink, which shows up as a clear peak on the graph (see the images in our gallery). Taking inspiration from The Diving Bell and the Butterfly, we built a communicator with which anybody wearing the EEG headset can spell out words and sentences in real time by blinking: no assistant required, with machine learning, cloud computing, and neurotechnology working behind the scenes and no complications on the user’s end. The core idea is sketched below.
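
As a rough illustration of that core idea, here is a minimal sketch of threshold-based blink detection on a single EEG channel. The threshold value, refractory window, and NumPy-based signal handling are assumptions for illustration, not our tuned production parameters.

```python
import numpy as np

def detect_blinks(signal, threshold=80.0, refractory=0.3, sample_rate=120):
    """Return sample indices where the channel crosses `threshold` (a blink peak).

    `signal` is a 1-D array of voltages from one frontal EEG channel.
    `refractory` (seconds) prevents a single blink from being counted twice.
    The threshold and refractory values here are illustrative, not tuned.
    """
    signal = np.asarray(signal, dtype=float)
    min_gap = int(refractory * sample_rate)   # samples to skip after a detection
    blinks, last = [], -min_gap
    for i, v in enumerate(signal):
        if v > threshold and i - last >= min_gap:
            blinks.append(i)
            last = i
    return blinks

# Example: a flat signal with two artificial "blink" peaks half a second apart
demo = np.zeros(120)
demo[30] = demo[90] = 150.0
print(detect_blinks(demo))  # -> [30, 90]
```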

Just like any piece of software, big or small, we needed some sort of stack. For our front end, we built a web app with the React.js framework; the decision was based primarily on our team’s prior experience with React.js rather than on any particular merit of the framework itself. This, of course, also brings HTML and JavaScript into our user-facing stack. For our back end, we use Python to turn the data from our EEG hardware’s proprietary software (EMOTIV EPOC X) into workable output; outside of our stack, this output takes the form of basic CSV files. Within our stack, we lean on one of the most potent tools we uncovered over the hackathon: Google Cloud. Google’s live database services via Firebase and its cloud computing instances let us offload a sizable share of the computation from our local machines. The two services worked in tandem: we push our brain activity to a Firebase cloud database, and we also reduce the dimensionality of our data points beforehand by applying Bloom filters, relatively complex transformations of the data. Google’s tools let us take on intense computational tasks while sidestepping their cost on our own hardware. To link this data back to the user so that it can appear in our front end, we use a simple WebSocket to relay integer counts of detected blinks in a given time interval; a minimal sketch of that relay follows.
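
The relay itself can be very small. Below is a sketch of a WebSocket server pushing a blink count to the frontend once per interval, assuming the third-party `websockets` package (recent versions) and a hypothetical `read_latest_blink_count()` helper standing in for the value pulled back out of Firebase.

```python
import asyncio
import json

import websockets  # third-party package: pip install websockets


def read_latest_blink_count() -> int:
    """Hypothetical placeholder for the blink count retrieved from Firebase."""
    return 1


async def relay(websocket):
    # Push the number of blinks detected in the last interval to the frontend.
    while True:
        payload = {"blinks": read_latest_blink_count()}
        await websocket.send(json.dumps(payload))
        await asyncio.sleep(1.0)  # illustrative 1-second interval


async def main():
    async with websockets.serve(relay, "localhost", 8765):
        await asyncio.Future()  # run forever


if __name__ == "__main__":
    asyncio.run(main())
```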

What We Learned + Challenges We Faced

  1. Technical capabilities of an EEG: First, we needed to learn how EEGs work. An EEG measures brain activity by detecting the electrical signals that neurons pass between one another when they fire. The raw data comes back as an array of 14 channels, each corresponding to one of the electrodes on the headset, with voltage plotted against time. We connect the headset to the computer using the EMOTIV EPOC X software, then stream data from the headset into our program as an HID agent. A small sketch of loading this data appears after this list.

  2. Real-Time Data Collection & Analysis: The EEG headset returns data at a rate of 120 Hz, or 120 data points per electrode per second. That quickly adds up when we continuously measure brain activity to detect blinks, and it would slow the computer and our program down considerably. Our solution was to turn to Firebase and Google Cloud Compute Engine, for two main reasons: 1) the ability to reduce the dimensional complexity of the data before it reaches our local methods, and 2) the scalability of delegating the compute-heavy work to the cloud, so that anybody could run this program even on a Raspberry Pi. We take the raw data from the headset and pass it into Firebase, which sends it to Google Cloud Compute Engine to compute its features, in our case detecting individual blinks using a Bloom filter (see the next bullet). The results are then passed back into Firebase, which communicates with our frontend; a sketch of this hand-off appears after this list.

  3. ML model using Bloom filters (or finding a threshold to detect blinks): We needed a way to characterize a blink in the data, so we turned to machine learning and Bloom filters. A Bloom filter is a space-efficient probabilistic data structure that tests whether an element is a member of a set; it can return false positives but never false negatives, and it can cut down on expensive database lookups by quickly ruling out non-members. We trained an ML model using these filters to detect our eye blinks, and we stored these computations in our Google Cloud integration. A generic Bloom filter sketch appears after this list.

  4. Cost: A big challenge to the scalability of our project was the cost of our headset: it was $1300 total, with funding generously provided by the PolySec Lab at California Polytechnic State University. We’ve found a solution to this in KerriganML, a startup founded by one of our members, who has built a $60 alternative to the headset using a low-noise analog application design.

  5. Alphabet: We needed a way for users to efficiently select letters with their blinks, so we turned to the trusty algorithm of binary search. Starting with the entire alphabet, if the user blinks, we narrow the choices to the second half; if the user does not blink, we keep the first half. This repeats until we are down to a single letter, which is shown in the output. A minimal sketch of this selection loop appears after this list.
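
To make bullet 1 concrete, here is a minimal sketch of loading one of the exported CSV files into a 14-channel structure. The file name and column layout (one column per electrode, purely numeric rows) are assumptions about the export format, not the exact EMOTIV schema.

```python
import csv

NUM_CHANNELS = 14  # one column per electrode on the headset (assumed layout)


def load_channels(path="eeg_export.csv"):
    """Read a CSV of EEG samples into a list of 14 channel traces (voltage over time)."""
    channels = [[] for _ in range(NUM_CHANNELS)]
    with open(path, newline="") as f:
        for row in csv.reader(f):
            for ch in range(NUM_CHANNELS):
                channels[ch].append(float(row[ch]))
    return channels
```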
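
For bullet 2, here is a hedged sketch of the Firebase hand-off using the `firebase_admin` Python SDK against a Realtime Database. The credential path, database URL, and node names are placeholders rather than our real project values, and the cloud-side blink detection is reduced to a stub.

```python
import time

import firebase_admin
from firebase_admin import credentials, db

# Placeholder credential path and database URL -- not our real project values.
cred = credentials.Certificate("serviceAccount.json")
firebase_admin.initialize_app(cred, {"databaseURL": "https://example-project.firebaseio.com"})


def push_raw_window(samples):
    """Local side: push one window of raw 14-channel EEG samples for cloud processing."""
    db.reference("eeg/raw").push({"t": time.time(), "samples": samples})


def publish_blink_count(count):
    """Cloud side (sketch): write the blink count that the frontend will read back."""
    db.reference("eeg/blinks").set({"t": time.time(), "count": count})
```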
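
Below is a generic Bloom filter sketch of the kind named in bullet 3: a bit array plus several salted hashes. The sizes and hash choices are illustrative; it shows the data structure itself, not the blink-detection model we trained on top of it.

```python
import hashlib


class BloomFilter:
    """Space-efficient probabilistic set membership: false positives possible, false negatives not."""

    def __init__(self, size=1024, num_hashes=4):
        self.size = size
        self.num_hashes = num_hashes
        self.bits = [False] * size

    def _positions(self, item):
        # Derive `num_hashes` bit positions from salted SHA-256 digests.
        for salt in range(self.num_hashes):
            digest = hashlib.sha256(f"{salt}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.size

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = True

    def might_contain(self, item):
        return all(self.bits[pos] for pos in self._positions(item))


bf = BloomFilter()
bf.add("blink-signature-42")                   # illustrative key
print(bf.might_contain("blink-signature-42"))  # True
print(bf.might_contain("noise-segment-7"))     # almost certainly False
```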
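
Finally, the letter-selection loop from bullet 5 is essentially a binary search driven by blinks. A minimal sketch, with a hypothetical `wait_for_blink()` standing in for the live EEG signal:

```python
import string


def wait_for_blink() -> bool:
    """Hypothetical: return True if a blink was detected during the current window."""
    raise NotImplementedError


def select_letter(choices=string.ascii_uppercase):
    """Narrow the alphabet by halves until a single letter remains.

    Blink    -> keep the second half of the remaining letters.
    No blink -> keep the first half.
    """
    letters = list(choices)
    while len(letters) > 1:
        mid = len(letters) // 2
        letters = letters[mid:] if wait_for_blink() else letters[:mid]
    return letters[0]
```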

Future Potential

Recent research on EEGs has demonstrated significant promise across many health domains, making us hopeful for a bright future as EEG technology advances. For example, deep learning techniques have been explored for the classification of EEG motor imagery signals, offering insights into the brain's motor functions and potential applications in neurorehabilitation and brain-computer interfaces. (source) Another study delves into the potential of stereotactic EEG (sEEG) for brain-computer interfaces (BCIs). sEEG, which measures electrophysiological brain activity using localized, penetrating depth electrodes, has primarily been used to identify epileptogenic zones in refractory epilepsy cases. The study suggests that sEEG could be pivotal for long-term BCI applications, especially given the success of related deep-brain stimulation implants. (source) In addition, emotion recognition through EEG analysis has emerged as a crucial concept in artificial intelligence, with immense potential in areas such as emotional health care, human-computer interaction, and multimedia content recommendation. (source)

With a cheap and widely available headset (mentioned in the Challenges section), we could democratize access to this technology, transforming the way we diagnose and treat neurological diseases, and even the way data and communication travel from human to human and from human to computer. Imagine integrating an EEG into the widely used VR headsets everyone may own in the future, creating a direct network between our brains. Once we connect our technology to our brains, the possibilities are endless.

Built With

  • epocx
  • google-cloud-(&-firebase)
  • python
  • react.js