Inspiration

Studying too often turns into memorization disguised as understanding. We've all felt "prepared" for an exam, only to blank on those overly complicated thinking questions. TruLearn was born from that frustration - we wanted to build an AI tutor that can tell the difference between a student who copied a definition and one who actually gets it. Instead of just giving you a grade, TruLearn adapts to both your weaknesses and strengths, continuing to test you until those tricky thinking questions finally click.

What it does

TruLearn is an AI-powered adaptive learning platform. Students upload their study materials (PDFs of notes, textbooks, or guides) and receive a personalized quiz - a smart mix of multiple-choice and open-ended questions generated from their content. After submission, TruLearn analyzes each response using NLP to determine whether the student genuinely understands the material or is relying on surface-level memorization. Based on performance, difficulty adjusts automatically: concepts you've mastered get harder, and weak areas get easier - creating a continuous learning loop until you feel confident enough to move on to another study set.
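The difficulty loop described above can be sketched as a small state machine. This is a minimal illustration, not TruLearn's actual logic - the level names and score thresholds here are hypothetical:

```python
# Illustrative sketch of an adaptive difficulty loop.
# Levels and thresholds are hypothetical, chosen for demonstration.

LEVELS = ["easy", "medium", "hard"]

def next_difficulty(current: str, score: float) -> str:
    """Step up a level on strong performance, down on weak performance.

    current: the concept's current difficulty level.
    score:   fraction of recent questions answered with understanding (0-1).
    """
    i = LEVELS.index(current)
    if score >= 0.8:                      # mastered -> harder questions
        i = min(i + 1, len(LEVELS) - 1)
    elif score < 0.5:                     # weak area -> easier questions
        i = max(i - 1, 0)
    return LEVELS[i]                      # middling scores stay put
```

Keeping the transition rule this simple makes the loop easy to tune per concept: each topic in a study set carries its own level, so mastered concepts ramp up independently of weak ones.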

How we built it

The frontend is built with React and Material UI for a clean, responsive experience. Users upload a PDF, which is processed by a Flask backend. The content is summarized and transformed into structured quiz questions using Google's Gemini LLM. Question types also adapt to the content (for example, study references containing fewer definitions and vocabulary terms lean more toward open-response questions). Student answers are then evaluated using a dual-model ML pipeline - sentence similarity detection (MiniLM) flags potential memorization, while natural language inference (DeBERTa) assesses correctness. The frontend dynamically adjusts question difficulty based on these results, creating a fully adaptive study session.
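The way the two model signals combine into a verdict can be sketched as a pure function. The function name, labels, and similarity threshold below are hypothetical - the real pipeline runs MiniLM embeddings and a DeBERTa NLI model upstream of this step:

```python
def judge_answer(similarity: float, nli_label: str,
                 sim_threshold: float = 0.92) -> str:
    """Combine the two model signals into one verdict (illustrative only).

    similarity: cosine similarity between the student's answer and the
                source passage (MiniLM-style sentence embeddings).
    nli_label:  "entailment" / "neutral" / "contradiction" from an
                NLI model such as DeBERTa.
    """
    if nli_label != "entailment":
        return "incorrect"      # answer doesn't follow from the material
    if similarity >= sim_threshold:
        return "memorized"      # correct, but near-verbatim copy
    return "understood"         # correct and phrased in the student's own words
```

The key design point is that correctness and originality are judged separately: an answer that entails the source but sits too close to it in embedding space is flagged as memorization rather than understanding.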

Challenges we ran into

The hardest challenge was designing logic that distinguishes real understanding from surface-level answers. Prompt engineering, response evaluation, and maintaining consistency across regenerated questions required careful iteration. Balancing accuracy with speed was also a key challenge - certain API requests took far too long, and timeouts became common well into the project.
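One standard way to keep slow API calls from stalling the whole session is a timeout-and-retry wrapper. This is a generic stdlib sketch, not TruLearn's actual code - the function name and budget values are assumptions:

```python
import concurrent.futures

def call_with_timeout(fn, *args, timeout=10.0, retries=2):
    """Run fn in a worker thread; retry when it exceeds the time budget."""
    for attempt in range(retries + 1):
        pool = concurrent.futures.ThreadPoolExecutor(max_workers=1)
        future = pool.submit(fn, *args)
        try:
            return future.result(timeout=timeout)
        except concurrent.futures.TimeoutError:
            pass  # this attempt timed out; fall through and retry
        finally:
            # don't block on the (possibly still-running) worker thread
            pool.shutdown(wait=False)
    raise TimeoutError(f"gave up after {retries + 1} attempts")
```

Note the caveat in the comments: the timed-out thread may keep running in the background, so this pattern bounds latency for the caller rather than cancelling the underlying request.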

Accomplishments that we're proud of

Even though we were only a two-person team, we built a full-stack adaptive learning platform that genuinely works. We're proud to have made something that was not only fun to build, but could also have a genuine impact on the future of education.

What we learned

We learned how to design adaptive AI systems, structure meaningful evaluation pipelines, and integrate multiple AI services into a cohesive product. More importantly, we learned how AI can be used to improve how people learn instead of doing the thinking for them.
