Winner of Qualtrics Best Data Visualization Hack!


Face Sentiment Analysis to Improve Teaching


It can be difficult for teachers to focus on teaching while also scanning the audience for confused faces. At the intersection of image feature recognition and the revolution in augmented and virtual reality, we leverage the HoloLens headset to capture student and audience sentiment and provide real-time feedback to the lecturer.


Goals

  • Provide insightful, real-time sentiment feedback for teachers
  • Maintain a historical database of student "learning" profiles


Tech Stack

  • HoloLens Headset (input)
  • Microsoft Cognitive Services APIs
    • Face Recognition
    • Emotion Analysis
  • OpenCV (Face Detection -- experimented with, but eventually dropped to reduce latency)
  • Firebase
  • Flask Web App
    • Real-time-updating student profiles backed by Firebase
    • Hosted on a Microsoft Azure Ubuntu VM
  • UX/UI
    • Web app + history visualization
    • HoloLens AR live stream + sentiment visualization
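The Flask piece of the stack can be sketched as a small endpoint the HoloLens posts confusion scores to. This is an illustrative minimal sketch, not the project's actual code: the `/scores` route, the payload shape, and the in-memory `profiles` dict (standing in for the Firebase student-profile store) are all assumptions.

```python
# Hypothetical sketch of the Flask web app: receive a confusion score for a
# student and append it to that student's profile history. In the real app
# this write would go to Firebase so the visualization updates in real time.
from flask import Flask, request, jsonify

app = Flask(__name__)
profiles = {}  # in-memory stand-in for the Firebase student-profile store

@app.route("/scores", methods=["POST"])
def record_score():
    data = request.get_json()
    student = data["student_id"]
    # Append the new confusion score to the student's running history.
    profiles.setdefault(student, []).append(data["confusion_score"])
    return jsonify({"student_id": student,
                    "samples": len(profiles[student])})
```

Keeping the endpoint this thin lets the HoloLens fire-and-forget a small JSON payload per frame batch while the web app reads the same store for its history visualization.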


Pipeline

  1. Detect the number of faces
  2. Run sentiment analysis on each face
  3. Compute a "Confusion Score" for each face
  4. Upstream each student profile and attach the confusion score
  5. Render the UI on the HoloLens

Future Features

  • Implement a different ML inference algorithm to obtain richer comprehension scores
  • Use the HoloLens microphone to correlate lecture topics with comprehension scores

Technical Challenges

HoloLens & Unity 5

The issues with the HoloLens headset primarily stemmed from hardware-software integration with the Unity 5 library -- particularly in finding compatible libraries for data management.

  • Sharing bulk byte data directly from a HoloLens headset without going through the developer's portal is largely uncharted territory. Because the HoloLens lacks strong support for Unity's mature HTTP request libraries, we spent 8+ hours hacking the HoloLens HTTP request service just to upstream the detected faces to the Microsoft Cognitive API. We finally cleared this hurdle after stumbling on a lone Japanese developer's code posted on an obscure forum.
  • After resolving the HTTP request issues between Unity and the HoloLens, JSON serialization and encoding in Unity proved to be a huge pain when talking to the Microsoft Cognitive API and our Firebase database: compatible APIs that would run both in Unity's software stack and on HoloLens hardware were severely lacking. Only in the final hours did we find a solution that let us deserialize JSON to fit our data-visualization needs in both the web app and the HoloLens headset.

Data Mining, ML, and Decision Trees

  • Early on, while discussing the feasibility of our project, we realized that the available computer vision and sentiment analysis libraries simply did not have the functionality to solve our core problem statement: assist teachers in helping confused students. With the Google Vision API, Microsoft's Cognitive Services, and Clarifai's APIs all providing only basic sentiment analysis over simple emotions such as "sadness" and "happiness", we had to quickly look into other ways to build a "confused student" detector. At first, not yet understanding the problem at a technical level, we tried a k-nearest-neighbors algorithm to separate clusters of "confused" and "attentive" students; once we identified our misunderstanding, we instead implemented an accurate, low-CPU-intensity decision tree in scikit-learn.
  1. We first had to mine Google Image Search for training data: thousands of "confused" faces and many more thousands of "non-confused" faces.
  2. After scraping with JS, downloading with Python, and extracting enriched metadata via Microsoft's Cognitive API, we had features richer than one-dimensional sentiment data to feed into the decision tree.
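The classifier swap described above can be sketched with scikit-learn: a decision tree over the eight per-face emotion scores. The tiny training set below is synthetic and purely illustrative; the real model was trained on the faces mined from Google Image Search.

```python
# Hypothetical sketch of the "confused student" classifier: a decision tree
# over the Emotion API's eight per-face scores. Training data is synthetic.
from sklearn.tree import DecisionTreeClassifier

# Feature order: anger, contempt, disgust, fear, happiness, neutral,
# sadness, surprise.
X = [
    [0.0, 0.0, 0.0, 0.1, 0.0, 0.2, 0.1, 0.6],  # confused-looking face
    [0.0, 0.0, 0.0, 0.3, 0.0, 0.1, 0.2, 0.4],  # confused-looking face
    [0.0, 0.0, 0.0, 0.0, 0.8, 0.2, 0.0, 0.0],  # attentive, happy face
    [0.0, 0.0, 0.0, 0.0, 0.1, 0.9, 0.0, 0.0],  # attentive, neutral face
]
y = ["confused", "confused", "attentive", "attentive"]

clf = DecisionTreeClassifier(max_depth=3, random_state=0)
clf.fit(X, y)

# Classify a new face from its emotion scores.
label = clf.predict([[0.0, 0.0, 0.0, 0.2, 0.0, 0.1, 0.1, 0.5]])[0]
```

A shallow tree like this is cheap to evaluate per face per frame, which is what made it a better fit than k-nearest-neighbors for real-time use on the headset pipeline.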
