Inspiration

This system helps teachers count students who raise their hands, so they no longer have to spend class time counting by hand.

What it does

This is an object-detection system that automatically counts the number of people raising their hands in classroom or conference settings. It can also count raised hands in real time using a webcam.
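For the real-time webcam mode, the loop is conceptually simple: grab a frame, run the trained detector, and count the "hand up" boxes. Below is a minimal sketch of such a loop using OpenCV's DNN module; the file names ('handsup.cfg', 'handsup.weights'), class setup, and thresholds are illustrative assumptions, not the project's actual inference code.

# Minimal sketch of real-time counting with a trained darknet-format YOLOv3 model.
# 'handsup.cfg' and 'handsup.weights' are hypothetical file names for a model
# trained on a single 'hand up' class.
import cv2

net = cv2.dnn.readNetFromDarknet("handsup.cfg", "handsup.weights")
model = cv2.dnn_DetectionModel(net)
model.setInputParams(size=(416, 416), scale=1 / 255.0, swapRB=True)

cap = cv2.VideoCapture(0)  # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    class_ids, scores, boxes = model.detect(frame, confThreshold=0.5, nmsThreshold=0.4)
    count = len(boxes)  # each surviving detection is one raised hand
    for (x, y, w, h) in boxes:
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
    cv2.putText(frame, f"Hands up: {count}", (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 255, 0), 2)
    cv2.imshow("Hands-up counter", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break
cap.release()
cv2.destroyAllWindows()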

How I built it

This application uses computer vision, specifically the YOLO (You Only Look Once) version 3 algorithm, to perform object detection. The detector was trained on a set of images of students raising their hands in class. The images were collected with an online image crawler and then manually annotated with 'sloth'. The annotated bounding boxes were used to produce custom anchors that differ from the default settings; these anchors were computed automatically with the K-Means algorithm in scikit-learn. The model was then trained on these images for 6 hours.
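Since the post mentions computing custom anchors with K-Means in scikit-learn, here is a minimal sketch of what that step can look like. The function name, box dimensions, and parameters are illustrative assumptions, not the project's actual code; the input is assumed to be (width, height) pairs extracted from the sloth annotations at the network input resolution.

# Sketch of custom anchor computation from annotated box sizes.
import numpy as np
from sklearn.cluster import KMeans

def compute_anchors(box_sizes, n_anchors=9):
    """Cluster bounding-box (width, height) pairs into YOLO anchors."""
    boxes = np.asarray(box_sizes, dtype=float)
    kmeans = KMeans(n_clusters=n_anchors, n_init=10, random_state=0)
    kmeans.fit(boxes)
    anchors = kmeans.cluster_centers_
    # Sort anchors by area so they can be assigned to YOLOv3's three
    # detection scales (small, medium, large objects).
    return anchors[np.argsort(anchors[:, 0] * anchors[:, 1])]

# Example with made-up box dimensions (pixels at 416x416 input resolution).
example_boxes = [(30, 62), (45, 90), (28, 55), (80, 150), (120, 230), (60, 110),
                 (35, 70), (95, 180), (140, 260), (50, 95), (25, 48), (110, 200)]
print(compute_anchors(example_boxes).round(1))

Note that scikit-learn's K-Means clusters on Euclidean distance between (width, height) pairs, whereas the YOLO papers cluster with an IoU-based distance; the scikit-learn route is simpler and matches the approach described above.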

What's next for YOLO-HandsupCounting

Emotion detection, to help teachers notice when students are losing focus.

References
Paper: You Only Look Once: Unified, Real-Time Object Detection. https://arxiv.org/abs/1506.02640
Paper: YOLOv3: An Incremental Improvement. https://pjreddie.com/media/files/papers/YOLOv3.pdf
YOLO implementation I: https://github.com/eriklindernoren/PyTorch-YOLOv3
YOLO implementation II: https://github.com/ayooshkathuria/pytorch-yolo-v3
YOLO implementation III: https://github.com/pjreddie/darknet
Automatic Image Downloading Tool: https://pypi.org/project/google-images-download/1.0.1/
