Inspiration:

We were inspired to create ProctorEye after noticing the growing challenge of ensuring exam integrity in remote learning environments. With online assessments becoming more common, we saw an opportunity to leverage computer vision to build a non-intrusive, automated proctoring solution. The idea grew out of the need to maintain fairness and trust in online education by monitoring suspicious behavior in real time.

What We Learned:

Computer Vision Techniques: We deepened our understanding of OpenCV and MediaPipe for real-time face and eye detection, which are critical for tracking gaze and head movements.

Modular Programming: Building the project in distinct modules (face detection, gaze tracking, alert system) helped us appreciate the importance of separation of concerns and code reusability.

Rapid Prototyping: We learned how to quickly integrate and test various libraries, enabling rapid iteration and refinement of the solution.

Challenges of Real-World Deployment: We also learned about the challenges of building a robust real-time system that performs well across different environments and lighting conditions.
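To illustrate the gaze-tracking idea above: once eye landmarks are available, "looking away" can be reduced to a simple ratio of where the iris sits between the eye corners. This is a minimal sketch with hypothetical function names and thresholds, not ProctorEye's exact logic:

```python
# Illustrative gaze heuristic: classify "looking away" from normalized
# landmark x-coordinates. Names and thresholds are hypothetical.

def gaze_ratio(iris_x: float, eye_left_x: float, eye_right_x: float) -> float:
    """Return the iris position within the eye: 0.0 (left corner) to 1.0 (right)."""
    width = eye_right_x - eye_left_x
    if width <= 0:
        raise ValueError("eye corners must satisfy eye_left_x < eye_right_x")
    return (iris_x - eye_left_x) / width

def looking_away(ratio: float, low: float = 0.35, high: float = 0.65) -> bool:
    """Flag gaze that drifts outside a centered band; the band needs tuning."""
    return ratio < low or ratio > high
```

In practice the same ratio would be computed per frame from MediaPipe's landmark coordinates, and a head-pose check would be layered on top.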

How We Built It:

We started by defining the project structure and identifying the core functionalities: detecting the face, tracking eye gaze, and logging suspicious behavior. Using MediaPipe, we implemented a face detector to extract facial landmarks, and then developed a simple algorithm to determine if the user was looking away from the screen. OpenCV provided the means to capture video from the webcam and display the results in real time. Finally, we added an alert system to log instances of suspicious behavior. The project was built using Python, with modular code that could be easily extended or integrated with web frameworks like Flask if needed.
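The alert system described above can be sketched as a small, standalone module. This is an illustrative design (class and method names are our invention, not ProctorEye's actual API); the cooldown window keeps one sustained glance from flooding the log with per-frame alerts:

```python
# Minimal alert logger sketch using only the standard library.
import time

class AlertLog:
    def __init__(self, cooldown_s: float = 2.0, clock=time.monotonic):
        self.cooldown_s = cooldown_s  # minimum seconds between repeated alerts
        self.clock = clock            # injectable clock, handy for testing
        self.events = []              # recorded (timestamp, message) tuples
        self._last = None             # time of the most recent alert

    def report(self, message: str) -> bool:
        """Record an alert unless one already fired within the cooldown window."""
        now = self.clock()
        if self._last is not None and now - self._last < self.cooldown_s:
            return False
        self._last = now
        self.events.append((now, message))
        return True
```

In the main capture loop, the gaze check would simply call something like `log.report("looking away")` whenever it fires.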

Challenges Faced:

Real-Time Accuracy: Ensuring that the detection and tracking worked in real time was challenging. Variations in lighting and camera quality sometimes affected accuracy.

Threshold Tuning: Defining what constitutes “suspicious behavior” required experimentation. We had to fine-tune the thresholds for eye movement to avoid false positives while still capturing genuine cases of cheating.

System Integration: Integrating different libraries smoothly required careful handling of data formats and processing speeds, particularly when combining MediaPipe’s landmark detection with OpenCV’s real-time video capture.

Scalability and Robustness: While the initial prototype worked well in controlled conditions, scaling the solution for diverse environments and ensuring robust performance remains an ongoing challenge.

Built With

Python, OpenCV, MediaPipe
