Professors continuously strive to improve their lecture material by asking for feedback. For example, at the end of every quarter, professors ask their students to evaluate the course content. Now professors can acquire objective feedback on how engaging each segment of every lecture is.

What it does

An application that allows professors to upload a video of their lecture and see their class's attentiveness during each segment of the lecture. We used computer vision on the students' faces, along with speech-to-text on the lecture content, to gauge how attentive students were at different points in the lecture.
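As a sketch of how per-segment attentiveness might be aggregated (assuming per-frame scores have already been computed from the video; all names here are illustrative, not the project's actual code):

```python
from collections import defaultdict

def segment_attentiveness(frame_scores, segment_len=60.0):
    """Average per-frame attentiveness scores into fixed-length
    lecture segments. frame_scores: list of (timestamp_sec, score)."""
    buckets = defaultdict(list)
    for t, score in frame_scores:
        buckets[int(t // segment_len)].append(score)
    # Mean score per segment, keyed by segment start time in seconds.
    return {seg * segment_len: sum(s) / len(s)
            for seg, s in sorted(buckets.items())}

# Example: three samples in the first minute, one in the second.
scores = [(5.0, 0.8), (30.0, 0.6), (55.0, 0.7), (70.0, 0.2)]
print(segment_attentiveness(scores))
```

A professor would then see one engagement number per minute of lecture, which is the granularity the segment view needs.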

How we built it

  - Google Cloud Speech API
  - Google Cloud Vision API
  - React, Express, and Node.js
  - Python
  - Hard work

Challenges we ran into

Formulating an attentiveness metric from face-emotions returned by the Google Vision API was challenging.
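A minimal sketch of what such a metric could look like. The weights and the rescaling are illustrative assumptions, not the team's actual formula; the Vision API's face detection does report per-emotion likelihoods on an ordinal scale (VERY_UNLIKELY through VERY_LIKELY), which is what the sketch maps to numbers:

```python
# Ordinal likelihood values as reported by the Vision API's face
# detection (VERY_UNLIKELY=1 ... VERY_LIKELY=5).
LIKELIHOOD = {"VERY_UNLIKELY": 1, "UNLIKELY": 2, "POSSIBLE": 3,
              "LIKELY": 4, "VERY_LIKELY": 5}

# Illustrative weights (assumptions, not the project's tuned values):
# joy and surprise suggest engagement; sorrow and anger suggest the opposite.
WEIGHTS = {"joy": 0.3, "surprise": 0.4, "sorrow": -0.3, "anger": -0.4}

def attentiveness(face):
    """Map one face's emotion likelihoods (dict of emotion name ->
    likelihood string) to a score in [0, 1]."""
    raw = sum(WEIGHTS[e] * (LIKELIHOOD[face[e]] - 1) / 4 for e in WEIGHTS)
    # raw lies in [-0.7, 0.7]; shift and scale it into [0, 1].
    return (raw + 0.7) / 1.4

face = {"joy": "LIKELY", "surprise": "POSSIBLE",
        "sorrow": "VERY_UNLIKELY", "anger": "VERY_UNLIKELY"}
print(round(attentiveness(face), 3))
```

Averaging this score over every face detected in a frame gives a class-level engagement signal for that moment.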

Integrating a Python backend with a JavaScript frontend was more challenging than we expected, and we spent a lot of time debugging it.
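One common pattern for this kind of integration, sketched here with the standard library only, is to have Node spawn the Python script as a child process and exchange newline-delimited JSON over stdin/stdout. The message shape below is an assumption for illustration, not the project's actual protocol:

```python
import json
import sys

def handle(request):
    """Process one JSON request from the Node frontend. Only a
    hypothetical 'attentiveness' command is sketched here."""
    if request.get("cmd") == "attentiveness":
        scores = request.get("scores", [])
        avg = sum(scores) / len(scores) if scores else 0.0
        return {"ok": True, "average": avg}
    return {"ok": False, "error": "unknown command"}

def main():
    # Read newline-delimited JSON requests; reply with one JSON line each.
    for line in sys.stdin:
        line = line.strip()
        if not line:
            continue
        print(json.dumps(handle(json.loads(line))), flush=True)

if __name__ == "__main__":
    main()
```

On the Node side this can be driven with `child_process.spawn`, writing one JSON line per request and reading one line back per reply, which avoids running a second HTTP server just for the backend.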

Accomplishments that we're proud of

  1. Coming up with a non-deterministic attentiveness metric that was a function of people's emotions.
  2. Learning Node and Express from scratch and building a functional app with them.

What we learned

Teamwork, and improved programming skills.

What's next for
