Inspiration

For us CS freshmen at Berkeley, class sizes are huge, so MOOCs are often used to give everyone easy access to course materials. One obvious downside we have experienced, however, is communication. In a small class, instructors can easily gauge how students are doing from their facial expressions or by asking directly, neither of which is possible in a MOOC. But instead of teachers watching for facial expressions, computers can. That is how we got the idea of using image recognition to analyze students' emotions while they watch a video lecture, so teachers can see how students feel about the material, and students can learn what they and their peers find difficult.

What it does

While students watch video lectures online, Emooc takes photos of them in the background. After they finish the lecture, it uses Azure cloud AI to analyze the photos. Based on Azure's emotion classification, it then shows, in the form of a plot, which parts of the course students found difficult and which parts they found easy.

How we built it

We built our program in Python. To capture images, we use OpenCV to access the camera and save the pictures it takes. To recognize emotion in those pictures, we use the Azure Face API. Finally, after analyzing the results returned from the Azure API, we use Matplotlib to generate a difficulty-vs-time graph that gives users a clear idea of which parts of the lecture are difficult. A rough sketch of the whole pipeline is below.
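
Here is a minimal sketch of that pipeline, assuming a Face API resource created in the Azure portal. The key, endpoint region, capture interval, and the placeholder score passed to `plot_difficulty` are illustrative stand-ins, not our exact code:

```python
import time

import cv2
import matplotlib.pyplot as plt
import requests

# Placeholders -- the real key and region come from the Azure portal.
FACE_API_KEY = "<your-face-api-key>"
FACE_API_URL = "https://<region>.api.cognitive.microsoft.com/face/v1.0/detect"
CAPTURE_INTERVAL = 15  # seconds between snapshots (illustrative)

def capture_frames(duration):
    """Snap a webcam photo every CAPTURE_INTERVAL seconds while the lecture plays."""
    camera = cv2.VideoCapture(0)
    snapshots = []  # (seconds into lecture, image path)
    start = time.time()
    while time.time() - start < duration:
        ok, frame = camera.read()
        if ok:
            elapsed = int(time.time() - start)
            path = f"frame_{elapsed:05d}.jpg"
            cv2.imwrite(path, frame)
            snapshots.append((elapsed, path))
        time.sleep(CAPTURE_INTERVAL)
    camera.release()
    return snapshots

def detect_emotion(path):
    """Send one saved image to the Azure Face API and return its emotion scores."""
    with open(path, "rb") as f:
        image_bytes = f.read()
    resp = requests.post(
        FACE_API_URL,
        params={"returnFaceAttributes": "emotion"},
        headers={"Ocp-Apim-Subscription-Key": FACE_API_KEY,
                 "Content-Type": "application/octet-stream"},
        data=image_bytes,
    )
    faces = resp.json()
    # An empty list means no face was detected in this frame.
    return faces[0]["faceAttributes"]["emotion"] if faces else None

def plot_difficulty(snapshots, score):
    """Turn per-frame emotion scores into the difficulty-vs-time graph."""
    times, scores = [], []
    for elapsed, path in snapshots:
        emotion = detect_emotion(path)
        if emotion is not None:
            times.append(elapsed)
            scores.append(score(emotion))
    plt.plot(times, scores)
    plt.xlabel("Time into lecture (s)")
    plt.ylabel("Estimated difficulty")
    plt.title("Emooc difficulty curve")
    plt.show()
```

For example, `plot_difficulty(capture_frames(600), score=lambda e: 1.0 - e["happiness"])` would chart ten minutes of lecture with a naive score; in practice we replace it with the weighted combination described under "Challenges we ran into".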

Challenges we ran into

While building it, we found it difficult to judge the difficulty of the material from the emotion information returned by the Azure API: it often classified an image as indicating medium difficulty when the student was actually showing confusion. To resolve this, we take multiple emotion attributes and compute a linear combination of them, which lets us classify images more accurately.
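
A minimal sketch of that scoring step, assuming Azure's standard emotion attributes. The weights below are illustrative stand-ins; our actual coefficients were tuned by trial and error:

```python
# Illustrative weights, not our tuned values: emotions that tend to accompany
# confusion push the score up, while happiness pulls it down.
DIFFICULTY_WEIGHTS = {
    "sadness": 0.8,
    "fear": 0.6,
    "surprise": 0.5,
    "anger": 0.3,
    "disgust": 0.2,
    "contempt": 0.2,
    "neutral": 0.1,
    "happiness": -0.6,
}

def difficulty_score(emotion):
    """Linear combination of Azure's emotion attributes (each in [0, 1])."""
    return sum(weight * emotion.get(name, 0.0)
               for name, weight in DIFFICULTY_WEIGHTS.items())
```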

Accomplishments that we're proud of

We are proud that, after hours of brainstorming, coding, and debugging, our program performs as well as we hoped. Once started, it automatically takes pictures and, with the help of the Azure cloud, describes the difficulty level of the video lecture quite accurately based on the student's facial expressions.

What we learned

We learned a lot during those hours of coding. Three of the four members of our team are first-time hackers, and all of us are in our first year of college. Through this experience, we learned how to use online machine learning APIs such as Azure's, how to use Python to work with files and take pictures, and, more 'macroscopically', how to organize a program out of separate parts and turn a broad idea into a detailed plan.

What's next for Emooc

Emooc won't stop here, of course. We have planned a lot for its future. We want to build a platform where teachers can see comprehensive, organized results for the whole class, and to integrate the video player itself so that students are notified when a hard part is coming and have a smoother experience.

Built With

Python, OpenCV, Azure Face API, Matplotlib
