Inspiration
During exam season, I noticed how teachers in my college sat for long hours manually grading scanned handwritten answer sheets. They were often squinting at poor-quality images, navigating unintuitive dashboards, and constantly referring to model answers. This process not only caused fatigue but also risked overlooking creative or out-of-the-box responses from students.
This inspired me to ask a simple question: Can grading be fair, fast, and still recognize creativity?
About the Project
The project proposes an AI-powered grading system that automates the evaluation of handwritten answer sheets using rubrics—a predefined set of grading criteria. This approach moves away from rigid “model answers” and acknowledges varied, creative responses while saving time for educators.
The goals are to:
Reduce the manual effort and eye strain for teachers
Automate routine grading using rubrics
Detect and highlight creative answers using reasoning models
Scale the system across grades 1–12 in schools and boards
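To make the rubric idea concrete, here is a minimal sketch of rubric-based scoring. All names (`Criterion`, `score_answer`) and the keyword-matching heuristic are illustrative assumptions, not the project's actual implementation; a production system would feed OCR output of the handwritten sheet into a reasoning model rather than match keywords.

```python
from dataclasses import dataclass

@dataclass
class Criterion:
    """One rubric criterion: keywords that signal it was addressed, and its weight.
    Hypothetical structure for illustration only."""
    name: str
    keywords: list
    max_points: float

def score_answer(answer: str, rubric: list) -> float:
    """Award a criterion's points if any of its keywords appear in the answer.
    A real system would use OCR text plus a reasoning model, not substring matching."""
    text = answer.lower()
    total = 0.0
    for c in rubric:
        if any(k.lower() in text for k in c.keywords):
            total += c.max_points
    return total

# Hypothetical rubric for "Explain photosynthesis"
rubric = [
    Criterion("mentions sunlight", ["sunlight", "light energy"], 2.0),
    Criterion("mentions chlorophyll", ["chlorophyll"], 2.0),
    Criterion("names products", ["glucose", "oxygen"], 1.0),
]

answer = "Plants use sunlight and chlorophyll to make glucose."
print(score_answer(answer, rubric))  # 5.0
```

Because each criterion is scored independently, an unconventional but correct answer can still earn full marks wherever it satisfies the rubric, which is the key difference from comparing against a single model answer.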
We also created Google Forms surveys to validate the problems teachers face and collected qualitative insights to guide the solution design.
What I Learned
How to validate a real-world problem through user research
The importance of balancing automation with human judgment in education
Technical nuances of handwriting recognition, OCR, and rubric modeling
How difficult—and valuable—it is to detect creativity algorithmically
Challenges Faced
Simplifying the idea of “rubric-based grading” for non-technical educators
Handling the variability and poor quality of scanned handwritten sheets
Designing a system that scales across subjects and grade levels
Creating a user interface that is both powerful and fatigue-free