Inspiration

In participation-based university classes, students compete for the privilege of speaking. Far too often, students become disengaged because they have been overlooked. The purpose of the software is to track how many times a student raises their hand and compare that to how many times they actually get to speak. Our goal is to increase transparency within the classroom. That could mean helping professors find their blind spots, uncovering biases, or a multitude of other applications.

What it does

The system detects people and whether their hands are raised. The first stage of the pipeline is frame extraction: the system pulls one frame per second from the video and submits it to facial recognition software. The system then crops each frame into a separate image of each person. Finally, the visual recognition model detects whether each person's arm is raised. The system then outputs graphs on an individual basis, displaying whether or not each student has their hand up at a given time.
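
As a rough illustration, the one-frame-per-second extraction might look like the following minimal sketch (assuming OpenCV 3+ Python bindings; the function and file names are placeholders, not our exact code):

```python
import os
import cv2

def extract_frames(video_path, out_dir="frames"):
    """Save one frame per second of the lecture video into out_dir."""
    if not os.path.isdir(out_dir):
        os.makedirs(out_dir)
    cap = cv2.VideoCapture(video_path)
    fps = int(round(cap.get(cv2.CAP_PROP_FPS))) or 30  # fall back if FPS is unreadable
    index = saved = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % fps == 0:  # keep the first frame of each second
            cv2.imwrite(os.path.join(out_dir, "frame_%04d.jpg" % saved), frame)
            saved += 1
        index += 1
    cap.release()
    return saved
```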

How we built it

The whole application is written in Python 2. We start by using OpenCV to split the video into frames and extract them. We then use an open-source facial recognition library (linked under Built With) to locate the individuals in each frame. After expanding the border of each face to include the person's whole body, the system sends the contents of that border to the Clarifai-based hand-raise detector that we trained. That information is then compiled into a set of per-student arrays held in one two-dimensional array, which Matplotlib turns into a set of time-based graphs. Lecturers can then use these graphs to enhance the students' experience and improve how they engage the class.
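
To make the flow concrete, here is a hedged sketch of the classification and graphing stages, assuming the Clarifai 2.x Python client and a custom model trained on a "hand_raised" concept. The API key, model name, concept name, and helper names below are illustrative placeholders rather than our exact code:

```python
from clarifai.rest import ClarifaiApp
import matplotlib.pyplot as plt

app = ClarifaiApp(api_key="YOUR_API_KEY")      # placeholder key
model = app.models.get("hand-raise-detector")  # hypothetical custom model name

def hand_raised(crop_path, threshold=0.5):
    """Ask the custom Clarifai model whether this person's arm is up."""
    response = model.predict_by_filename(crop_path)
    concepts = response["outputs"][0]["data"]["concepts"]
    scores = dict((c["name"], c["value"]) for c in concepts)
    return scores.get("hand_raised", 0.0) >= threshold

def plot_timelines(results):
    """results[s][t] is 1 if student s had a hand up at second t, else 0."""
    for s, timeline in enumerate(results):
        plt.figure()
        plt.step(range(len(timeline)), timeline, where="post")
        plt.yticks([0, 1], ["down", "up"])
        plt.xlabel("time (s)")
        plt.title("Student %d hand-raise timeline" % s)
        plt.savefig("student_%d.png" % s)
        plt.close()
```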

Challenges we ran into

One of our original challenges was strategic: we had to find a method of breaking a large image down into individual pictures of each person. To do this, we implemented a separate facial recognition step to identify each person, expanded each facial border to include the entire body, cropped along that border, and sent the crop to the hand-raise identifier, as sketched below.
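
A hedged sketch of that border-expansion step, built on the face_recognition library's face_locations output (the expansion multipliers are illustrative guesses, not our tuned values):

```python
import face_recognition

def body_crops(frame):
    """Expand each detected face box into a rough full-body crop.

    `frame` is an RGB image as a NumPy array, e.g. from
    face_recognition.load_image_file().
    """
    h, w = frame.shape[:2]
    crops = []
    # face_locations returns (top, right, bottom, left) boxes
    for top, right, bottom, left in face_recognition.face_locations(frame):
        face_h, face_w = bottom - top, right - left
        # widen sideways to catch raised arms, extend downward for the torso
        t = max(0, top - face_h)
        b = min(h, bottom + 4 * face_h)
        l = max(0, left - 2 * face_w)
        r = min(w, right + 2 * face_w)
        crops.append(frame[t:b, l:r])
    return crops
```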

Another problem we faced was contention on IBM's service: other students actually deleted our classifiers while we were using the Watson API. This forced us to move away from Watson to a different API, Clarifai.

Finally, we had issues installing the OpenCV library. We went so far as to ask the StdLib team, who shed some light on the topic but did not solve the issue. We were forced to change environments and continue development on another computer.

Accomplishments that we're proud of

We are proud that we were able to create a program that uses three separate libraries in concert. The program is also genuinely effective: we have seen over 80% accuracy on most images. We are excited to enhance the classroom experience.

What we learned

This was our first attempt at using a neural network and facial recognition software together. We learned how to manipulate videos and images to fit the needs of the software. There was also a lot of trial and error in choosing APIs, and that was certainly a learning experience.

What's next for Participator

The next step for Participator is to approach our first customer, Ivey. We will first propose a trial, using Ivey's classes as a test environment while we adapt the product to larger rooms. Ivey is a natural fit because its classrooms are already equipped with high-quality cameras and all of its classes are participation-based.

Check out our demo here: link

Built With

  • clarifai
  • opencv
  • https://github.com/ageitgey/face-recognition
  • facial-detection