Faced with hundreds of online attendees on a single computer screen, how can professors elegantly test students' understanding of the course material? Some students are too shy to ask questions when they are confused, and sometimes the professor moves so fast that confused students have no time to ask. This inspired us to build software that helps professors quickly find out how many students are confused, so that they can slow down or explain certain points in more depth.
What it does
Our program is a Python program running in the background. While the professor is giving a Zoom lecture, the program takes a screenshot of Zoom's gallery view, splits the screenshot into individual face pictures, and sends them to the Azure server. The model we trained classifies each picture as "confused" or "unconfused", and the ratio of confused students to all students is computed. If the ratio is higher than 50%, an alert is sent to the professor indicating that the students may not understand the material very well.
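The ratio-and-alert step above can be sketched as a small helper. The 50% threshold and the "confused"/"unconfused" labels come from our description; the function names are illustrative:

```python
def confusion_ratio(labels):
    """Fraction of face labels classified as "confused".

    `labels` holds one string per student, each either "confused"
    or "unconfused" (the two classes the model predicts).
    """
    if not labels:
        return 0.0
    return sum(1 for label in labels if label == "confused") / len(labels)


def should_alert(labels, threshold=0.5):
    """Alert the professor when more than half of the class looks confused."""
    return confusion_ratio(labels) > threshold
```

For example, `should_alert(["confused", "confused", "unconfused"])` is true, since 2 of 3 students (about 67%) are above the 50% threshold.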
How I built it
We created our own dataset of confused and not-confused faces to train a model on Azure Custom Vision. The program runs on Python: it takes screenshots of the meeting at regular intervals, processes each one to extract the individual faces, and then determines whether each student is confused.
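Extracting the individual faces can be sketched with Pillow. This is a minimal sketch that assumes the gallery view is a regular rows-by-cols grid of equally sized tiles; the row and column counts would have to match the actual Zoom layout:

```python
from PIL import Image


def split_gallery(screenshot, rows, cols):
    """Cut a gallery-view screenshot into rows*cols equal tiles,
    one per participant, returned left-to-right, top-to-bottom."""
    width, height = screenshot.size
    tile_w, tile_h = width // cols, height // rows
    tiles = []
    for r in range(rows):
        for c in range(cols):
            box = (c * tile_w, r * tile_h, (c + 1) * tile_w, (r + 1) * tile_h)
            tiles.append(screenshot.crop(box))
    return tiles
```

Each returned tile can then be saved or streamed to the prediction endpoint as a separate image.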
Challenges I ran into
At first, we wanted to use the pre-trained Face API under Azure Cognitive Services to detect students' facial expressions. However, we found that this model only recognizes emotions like "happiness" and "sadness", not "confusion". So we decided to train our own model on our own pictures using Azure Custom Vision, and it turned out to make fairly accurate predictions.
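Calling our published Custom Vision model can be sketched against its REST prediction endpoint. The URL path follows the Custom Vision prediction API (v3.0); the endpoint, project ID, iteration name, and key are placeholders for your own resource's values:

```python
import json
import urllib.request


def build_prediction_url(endpoint, project_id, iteration):
    """URL of the Custom Vision image-classification prediction endpoint."""
    return (f"{endpoint}/customvision/v3.0/Prediction/"
            f"{project_id}/classify/iterations/{iteration}/image")


def top_tag(predictions):
    """Pick the tag with the highest probability, e.g. "confused"."""
    return max(predictions, key=lambda p: p["probability"])["tagName"]


def classify_face(url, prediction_key, image_bytes):
    """Send one face picture to Custom Vision and return its predicted tag."""
    request = urllib.request.Request(
        url,
        data=image_bytes,
        headers={
            "Prediction-Key": prediction_key,
            "Content-Type": "application/octet-stream",
        },
    )
    with urllib.request.urlopen(request) as response:
        result = json.load(response)
    return top_tag(result["predictions"])
```

The Azure SDK (`azure-cognitiveservices-vision-customvision`) wraps the same endpoint; we show the raw request here to keep the sketch dependency-free.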
Accomplishments that I'm proud of / What I learned
We learned how powerful Azure Cognitive Services are and how to integrate them with our own Python program.
What's next for Online Class? No Confusion!
After the hackathon, we want to train a model on a larger dataset that contains more pictures labeled as confused, so that we can detect the "confused" expression with higher precision. We also want to integrate APIs from Zoom and other video-conferencing software so that we can read the video stream directly instead of taking screenshots. We may even collaborate with Zoom, Inc. to integrate the confusion-detection feature into Zoom for educational purposes.