Inspiration

In the new era of AI-powered learning, access to knowledge has become instant—but understanding remains largely unmeasured. Most platforms reward course completion, time spent, or assignment submission, without truly capturing how well a learner understands a topic. This gap inspired us to build a system that goes beyond completion metrics and evaluates real understanding through interaction, reasoning, and conversation with AI.

What it does

IntelliGrad is an AI-driven education platform that measures a learner’s true understanding of a topic rather than just tracking progress or completion. Learners interact naturally with an AI agent—asking questions, explaining concepts, and answering subjective quizzes. The platform uses intelligent cross-questioning and structured evaluation to assess reasoning depth, conceptual clarity, and consistency over time, producing an Understanding Score that reflects actual learning.

How we built it

We designed the platform around structured learning and controlled AI evaluation:

Course content is organized into modules and topics, stored as Markdown and structured JSON contexts.

An AI agent handles learning conversations, while a separate evaluation flow extracts understanding signals.

User interactions trigger targeted cross-questions to validate understanding.

AI generates structured evaluation signals, and the backend computes deterministic scores.

Scores evolve over time at the topic and module level, ensuring fairness and resistance to gaming.

Challenges we ran into

One of the biggest challenges was separating learning from evaluation: scoring conversational interactions without discouraging curiosity required careful design. Another was ensuring that the AI did not act as the final judge, keeping scoring logic deterministic and auditable in the backend. Designing cross-questions that reveal understanding without giving away hints was also non-trivial.

Accomplishments that we're proud of

Built a system that evaluates understanding, not just answers

Designed a fair and scalable understanding-based scoring model

Successfully integrated conversational AI with structured evaluation

Created a platform that mirrors how human tutors assess learners

Ensured the system is hard to game and audit-ready

What we learned

We learned that effective AI education systems must balance flexible learning with strict evaluation. Conversations are powerful learning tools, but only controlled evaluation moments should influence scores. We also learned the importance of backend-owned memory, context boundaries, and explainable scoring in building trust with learners and institutions.

What's next for IntelliGrad

Next, we plan to:

Expand support for more subjects and advanced modules

Introduce certificates and skill-level benchmarks

Build analytics dashboards for educators and corporate training teams

Integrate IntelliGrad with LMS and HR systems

Enhance adaptive learning paths based on long-term understanding trends
