Inspiration

Motivated by the project theme to build something useful but also unique, I set out to explore the practical topic of video-analytics application development, a skill I've developed over the past two years. As an enthusiast and regular practitioner of meditation, I began to ask what more I could contribute to the already immensely popular meditation-app ecosystem. Along the way, I came across an interesting idea: that eye movements and patterns can serve as a "window into the soul" [1], [2]. Blink frequency, pupil dilation, eyebrow shape, and frequency of eye movement are all indicators of emotional state. Through this project, I set out to make meditation more personalized and tailored to each user by analyzing their eye movements to estimate their emotional attributes.

What it does

EyeMotion is an app (well, a web app) that uses a 30-second camera recording of the user's eyes and face to predict four attributes of the user's emotional state: calmness, focus, relaxation, and composure.

  • Calmness: reflected by head and face movement
  • Focus: reflected by steadiness of gaze
  • Relaxation: reflected by eyebrow and eye shape
  • Composure: a composite score combining the other three attributes

How we built it

Deep Learning Model and Attribute Estimation

The model is a simple convolutional network built with TensorFlow that processes one frame at a time, predicting basic emotional attributes for each frame. It was trained on a facial expression dataset from Kaggle. The score for each attribute is computed as follows:

  • Calmness: movement of face keypoints across the screen (less movement = more calmness)
  • Focus: movement of the eyes with respect to the face (less movement = more focus)
  • Relaxation: weighted average of the model's emotion scores (e.g. positive/neutral emotions carry positive weights, negative emotions carry negative weights)

Web App

The web app was built using Node.js with plain HTML/JavaScript/CSS. The Plotly library for JavaScript was used to display the graphs in the analytics section.
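To make the movement-based scoring heuristics concrete, here is a minimal per-frame sketch. All function names, the normalization scale, and the equal weighting in the composite are hypothetical illustrations, not the app's exact implementation:

```python
import math

def mean_displacement(prev_pts, curr_pts):
    """Average Euclidean distance moved by a set of (x, y) keypoints."""
    dists = [math.dist(p, c) for p, c in zip(prev_pts, curr_pts)]
    return sum(dists) / len(dists)

def movement_score(movement, scale=100.0):
    """Map movement to a 0-1 score: less movement = higher score."""
    return 1.0 / (1.0 + movement / scale)

# Toy example: three face keypoints in two consecutive frames.
frame1 = [(100, 100), (150, 100), (125, 140)]
frame2 = [(102, 101), (151, 100), (126, 141)]

# Calmness: movement of face keypoints across the screen.
calmness = movement_score(mean_displacement(frame1, frame2))

# Focus would use eye landmarks measured relative to the face,
# so head motion alone doesn't lower the score; the same keypoints
# are reused here only to keep the toy example short.
focus = movement_score(mean_displacement(frame1, frame2))

# Composure: composite of the three attributes (equal weights assumed).
relaxation = 0.8  # would come from the CNN's emotion scores
composure = (calmness + focus + relaxation) / 3
```

Small keypoint displacements yield scores near 1.0, and larger accumulated movement drives the scores toward 0.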

Challenges we ran into

Challenge 1: Estimating Emotional Attributes

I was somewhat stuck when thinking about how to train the deep learning model, as I couldn't find a dataset specific to eyes/eyebrows or to meditation-related emotions. The workaround I went with was to use a standard facial expression recognition dataset and map its predictions onto the attributes described above.
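For instance, mapping standard facial-expression classes to the relaxation attribute could look like the following sketch. The class labels and weight values here are illustrative assumptions, not the trained model's actual labels or weights:

```python
# Hypothetical relaxation weights per emotion class:
# positive/neutral emotions contribute positively, negative ones negatively.
WEIGHTS = {
    "happy": 1.0,
    "neutral": 0.5,
    "surprise": 0.0,
    "sad": -0.5,
    "fear": -0.8,
    "angry": -1.0,
}

def relaxation_score(probs):
    """Weighted average of per-class probabilities, rescaled to [0, 1]."""
    raw = sum(WEIGHTS[label] * p for label, p in probs.items())  # in [-1, 1]
    return (raw + 1.0) / 2.0

# A mostly neutral, slightly happy frame scores above the midpoint.
frame_probs = {"happy": 0.2, "neutral": 0.6, "surprise": 0.1,
               "sad": 0.05, "fear": 0.03, "angry": 0.02}
```

This keeps the FER model as-is and pushes the meditation-specific interpretation into a small, tunable weight table.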

Challenge 2: Video Recording through Web App

Since this was my first time doing video recording through a web app, I ran into some trouble with the browser's security restrictions around JavaScript, such as camera access requiring a secure context (HTTPS or localhost) and explicit user permission.

Accomplishments that we're proud of

Happy to have finished and polished the design of the web app.

What's next for EyeMotion

Time-based Model

I didn't have time for this, but a future goal is to make the deep learning model more sophisticated, so that it analyzes frames sequentially and aggregates temporal information rather than scoring each frame independently. The advantage is that it could uncover patterns that are difficult to capture with simple algebraic formulas.

Custom Dataset

It would be nice to collect a custom dataset of facial images of people meditating. This would enable the model to make more tailored and accurate predictions, and the app to offer more detailed and precise suggestions. One way to do this would be to let users of the app voluntarily opt in to an anonymized data-sharing program.
