Inspiration

The core inspiration was the widespread frustration and inefficiency caused by manual assignment grading. As students, we've experienced the delays in feedback; as potential TAs and instructors, we understand the immense time sink and the challenge of maintaining objectivity across large classes. Our goal was to create a solution that not only grades instantly but learns and evolves to ensure continuous fairness and accuracy, transforming assessment from a bottleneck into a real-time diagnostic tool.

What it does

The AutoGraders is a self-evolving grading agent that provides instant, objective, and actionable feedback on academic submissions. Our key differentiator is dynamic calibration: the agent recalibrates its scoring model with each new submission it processes, so grading standards stay precise and relevant as the data stream grows and the course progresses.
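
One way the dynamic calibration could work is to keep a running mean and standard deviation of scores that updates with every new submission. This is a minimal sketch in Node.js, using Welford's online algorithm; the `ScoreCalibrator` class and its method names are our own illustration, not the actual implementation.

```javascript
// Hypothetical sketch of dynamic calibration: Welford's online algorithm
// keeps a running mean and standard deviation of grades, updated
// incrementally as each new submission is scored.
class ScoreCalibrator {
  constructor() {
    this.n = 0;    // submissions seen so far
    this.mean = 0; // running mean of scores
    this.m2 = 0;   // running sum of squared deviations (for variance)
  }

  // Incorporate one newly graded score into the running statistics.
  update(score) {
    this.n += 1;
    const delta = score - this.mean;
    this.mean += delta / this.n;
    this.m2 += delta * (score - this.mean);
  }

  // Sample standard deviation of all scores seen so far.
  get std() {
    return this.n > 1 ? Math.sqrt(this.m2 / (this.n - 1)) : 0;
  }
}
```

Because the statistics update in O(1) per submission, the calibration can keep pace with the incoming stream without re-scanning past grades.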

How we built it

Node.js and the ChatGPT LLM

Challenges we ran into

Dynamically reviewing the quality of each new submission as the agent grades more and more of them, and performing retrospective reviews of previously graded submissions.

Accomplishments that we're proud of

  • Designing the entire flow, from manually grading the first 10 submissions, to setting the mean and standard deviation, to AI grading
  • Getting the LLM to match students' answers to the key bullet points and suggest a tag: "good", "average", or "poor"
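
The tagging step in the flow above can be sketched as a simple bucketing against the mean and standard deviation established from the initial manual grades. The `suggestTag` function and its one-standard-deviation thresholds are assumptions for illustration, not the exact cutoffs we used.

```javascript
// Hypothetical sketch: once the mean and standard deviation are set
// from the first manually graded submissions, each subsequent score
// is bucketed into one of the three suggested tags.
function suggestTag(score, mean, std) {
  if (score >= mean + std) return "good"; // well above the class average
  if (score <= mean - std) return "poor"; // well below the class average
  return "average";                       // within one std of the mean
}
```

In practice the tag is only a suggestion passed to the LLM's output; a human reviewer still confirms it, as noted under "What we learned."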

What we learned

  • LLMs are still quite limited; a human-in-the-loop (HITL) is needed to review each submission's suggested grade.

What's next for The AutoGraders

Better context engineering will enable retrospective review of previously graded submissions, based on the distribution of marks accumulated so far.
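
One plausible shape for that retrospective review is to flag earlier submissions whose grades become outliers once the distribution has shifted. This sketch assumes a z-score threshold; the `flagForReview` function and the threshold of 2 are hypothetical choices, not a committed design.

```javascript
// Hypothetical sketch of retrospective review: after the score
// distribution has evolved, flag previously graded submissions whose
// grade now deviates strongly from the current mean, so a human (or
// the agent) can re-examine them.
function flagForReview(gradedScores, mean, std, zThreshold = 2) {
  return gradedScores
    .map((score, i) => ({
      id: i,                                   // index of the submission
      z: std > 0 ? (score - mean) / std : 0,   // z-score under current stats
    }))
    .filter(({ z }) => Math.abs(z) > zThreshold)
    .map(({ id }) => id);
}
```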

Built With

  • Node.js
  • ChatGPT
