Inspiration

We want to change the education sector by measuring attentiveness and incorporating more effective methods to teach students. The motivation behind this idea is to enable teachers to make their classes more engaging and to put an end to ineffective learning.

What it does

Uses biometrics (EEG signals from the Muse headband, eye tracking, and facial expression tracking) to evaluate how attentive an individual is and how well they are taking in information. Provides feedback about each individual learner, as well as an aggregate for a group of learners in a classroom or event setting.
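
As a purely illustrative sketch (the field names and scores below are assumptions, not our actual schema), per-learner attention scores might be rolled up into group feedback like this:

```python
# Illustrative sketch only: rolling per-learner attention scores (0.0-1.0)
# into classroom-level feedback. Field names are assumptions.
from statistics import mean

def aggregate_attention(scores: dict) -> dict:
    values = list(scores.values())
    return {
        "class_average": mean(values),
        "least_attentive": min(scores, key=scores.get),
        "share_below_half": sum(v < 0.5 for v in values) / len(values),
    }

print(aggregate_attention({"alice": 0.82, "bob": 0.41, "carol": 0.67}))
```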

How we built it

We measured EEG, PPG, and gyroscope data with the sensors on the Muse headband. We created an OSC pipeline to stream all of this information to our server in real time. We then applied signal processing techniques (FFT, filtering, sampling, smoothing) and machine learning techniques (a classification algorithm) to extract useful signals and detect anomalies. We correlated this information with the user's attentiveness while using the product; for example, research has shown that the theta component of an EEG correlates with individual attention.
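
A minimal sketch of the receiving end of such a pipeline, assuming the python-osc library, Muse's "/muse/eeg" OSC address, and a 256 Hz sampling rate (the port and window length are illustrative):

```python
# Receive Muse EEG over OSC and estimate relative theta (4-8 Hz) band power.
# Assumes python-osc; the address, port, and window size are illustrative.
from collections import deque
import numpy as np
from pythonosc import dispatcher, osc_server

FS = 256                      # Muse EEG sampling rate (Hz), assumed
WINDOW = 2 * FS               # 2-second analysis window
buf = deque(maxlen=WINDOW)

def theta_power(samples, fs=FS):
    """Theta (4-8 Hz) power relative to total power up to 30 Hz."""
    x = np.asarray(samples) - np.mean(samples)        # remove DC offset
    spectrum = np.abs(np.fft.rfft(x)) ** 2            # FFT -> power spectrum
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    theta = spectrum[(freqs >= 4) & (freqs <= 8)].sum()
    total = spectrum[freqs <= 30].sum() or 1.0        # guard against /0
    return theta / total

def on_eeg(address, *channels):
    buf.append(np.mean(channels))                     # average the electrodes
    if len(buf) == WINDOW:
        print(f"relative theta power: {theta_power(buf):.3f}")

disp = dispatcher.Dispatcher()
disp.map("/muse/eeg", on_eeg)
osc_server.ThreadingOSCUDPServer(("0.0.0.0", 5000), disp).serve_forever()
```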

We also used eye tracking and facial expression tracking on live video to identify people's attention levels. We calculated the eye aspect ratio after detecting the eyes in each image frame, and we determined a suitable threshold for triggering a drowsiness alert through experimentation. We then used the Google Cloud Vision API to identify facial expressions and analyzed how that information correlates with a person's attentiveness.
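
The eye aspect ratio follows the standard six-landmark formulation; the landmark ordering and the threshold value below are illustrative (in practice we tuned the threshold by experimentation):

```python
# Eye aspect ratio (EAR) drowsiness check. Landmarks p1..p6 are ordered
# around one eye; the 0.25 threshold is illustrative, tuned in practice.
import numpy as np

def eye_aspect_ratio(eye: np.ndarray) -> float:
    """eye: (6, 2) array of landmark coordinates p1..p6."""
    a = np.linalg.norm(eye[1] - eye[5])   # ||p2 - p6||, vertical distance
    b = np.linalg.norm(eye[2] - eye[4])   # ||p3 - p5||, vertical distance
    c = np.linalg.norm(eye[0] - eye[3])   # ||p1 - p4||, horizontal distance
    return (a + b) / (2.0 * c)

EAR_THRESHOLD = 0.25

def is_drowsy(left_eye, right_eye) -> bool:
    ear = (eye_aspect_ratio(left_eye) + eye_aspect_ratio(right_eye)) / 2.0
    return ear < EAR_THRESHOLD            # low EAR means the eyes are closing
```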

Challenges we ran into

  • We were not able to find comprehensive documentation and developer support for the Muse headband, so we had to build our own data pipeline over the OSC protocol.
  • Streaming the signals was challenging because of latency and the size of the data.
  • Creating an API and combining different libraries to analyze the data was also difficult.

Accomplishments that we're proud of

  • We created a tool for understanding human learning in greater depth that could help increase students' and teachers' productivity.
  • We were able to accurately analyze the live video feed, identify eye landmark positions and facial expressions, and integrate the two signals.

What we learned

  • When building the data pipeline to extract information from the sensor, we learned about the UDP and OSC protocols. We also got to experiment with different signal processing and machine learning techniques to extract important information from the sensor data.
  • We learned to use the Google Cloud Vision API and the various services it provides; a minimal sketch of the call we used follows this list.
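
A minimal sketch of that call, assuming the google-cloud-vision client library is installed and credentials are configured (the file name is illustrative):

```python
# Detect faces in one video frame and read expression likelihoods.
# Assumes google-cloud-vision with credentials already configured.
from google.cloud import vision

client = vision.ImageAnnotatorClient()
with open("frame.jpg", "rb") as f:            # one frame from the live feed
    image = vision.Image(content=f.read())

response = client.face_detection(image=image)
for face in response.face_annotations:
    # Likelihoods are enums ranging from UNKNOWN up to VERY_LIKELY.
    print("joy:", face.joy_likelihood.name,
          "sorrow:", face.sorrow_likelihood.name,
          "surprise:", face.surprise_likelihood.name)
```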

What's next for NeuroLearn

We plan to scale NeuroLearn across a large number of learners, both online and in physical classrooms. By aggregating and analyzing the data from different students, we believe that our software will empower teachers to make more informed decisions about their curriculum and teaching techniques. In terms of technical development, we plan to use more complex infrastructure such as Spark, Kafka, and Google Cloud to enable near real-time streaming and analytics; a rough sketch of what publishing into such a pipeline might look like follows below.
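
As a rough sketch of that direction (assuming the kafka-python client; the broker address and topic name are illustrative, not a deployed configuration):

```python
# Publish per-learner attention scores to Kafka for near real-time
# aggregation downstream. Assumes kafka-python; names are illustrative.
import json
import time

from kafka import KafkaProducer

producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send("attention-scores", {
    "learner_id": "alice",
    "attention": 0.82,        # fused EEG + video score, 0.0-1.0
    "timestamp": time.time(),
})
producer.flush()
```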

Many universities already have cameras set up that film students, so the infrastructure to support NeuroLearn en masse is already in place. The only thing remaining is for us to act first and seize that potential.
