Inspiration

Most learning tools focus on teaching content, but very few focus on how students think while solving problems. As students, we noticed a common pattern: when we got an answer wrong, the remedy was always more practice, yet we rarely understood why our thinking had failed in the first place.

Whether it was math, programming, or conceptual questions, the feedback was almost always limited to the final answer. This gap between mistakes and understanding inspired us to build ShadowTutor AI—a system that prioritizes diagnosing thinking errors instead of simply delivering solutions.


What it does

ShadowTutor AI is an intelligent learning assistant that analyzes a student’s problem-solving attempt and identifies how and why their reasoning went wrong.

Instead of immediately giving answers, ShadowTutor:

  • Diagnoses the type of mistake (conceptual, logical, assumption-based, or knowledge gap)
  • Explains the root cause behind the error
  • Reframes the correct way to think about the problem
  • Provides a guided hint rather than a full solution
  • Encourages reflection through a targeted question

By focusing on reasoning patterns instead of results, ShadowTutor helps learners correct their thinking process, making future learning faster and more effective.
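The five diagnostic steps above map naturally onto a structured response. As a minimal sketch (the field names and the four-way mistake taxonomy below are illustrative, not ShadowTutor's actual schema):

```python
from dataclasses import dataclass
from enum import Enum

class MistakeType(Enum):
    # The four mistake categories ShadowTutor distinguishes
    CONCEPTUAL = "conceptual"
    LOGICAL = "logical"
    ASSUMPTION = "assumption-based"
    KNOWLEDGE_GAP = "knowledge gap"

@dataclass
class Diagnosis:
    """One structured diagnostic response (field names are illustrative)."""
    mistake_type: MistakeType   # what kind of error was made
    root_cause: str             # why the reasoning went wrong
    reframe: str                # the correct way to think about the problem
    guided_hint: str            # a nudge, deliberately not a full solution
    reflection_question: str    # asks the learner to re-examine their thinking

# Example: a student cancels terms across addition in a fraction
diag = Diagnosis(
    mistake_type=MistakeType.CONCEPTUAL,
    root_cause="Treated (a + b)/b as if b could be cancelled from a sum.",
    reframe="Cancellation applies to common factors, not terms in a sum.",
    guided_hint="Try splitting the fraction: (a + b)/b = a/b + b/b.",
    reflection_question="When is it valid to cancel b in a fraction?",
)
print(diag.mistake_type.value)  # → conceptual
```

Keeping the response in a fixed shape like this is what lets the UI render each diagnostic step consistently instead of showing free-form chat text.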


How we built it

We designed ShadowTutor AI around a diagnostic-first architecture using large language models. The system prompt was carefully engineered to enforce strict reasoning rules—prioritizing analysis before answers and adapting explanations based on detected mistake patterns.
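To illustrate what "analysis before answers" looks like as a system instruction, here is a hedged sketch; the actual ShadowTutor prompt is not reproduced here, and the request shape below is a simplification rather than the real Gemini call:

```python
# Hypothetical system instruction enforcing "diagnose before you answer".
# This is an illustrative sketch, not ShadowTutor's production prompt.
SYSTEM_INSTRUCTION = """\
You are ShadowTutor, a diagnostic learning assistant.
Rules, in strict order:
1. Classify the student's mistake as conceptual, logical,
   assumption-based, or a knowledge gap.
2. Explain the root cause of the error before anything else.
3. Reframe the correct way to think about the problem.
4. Give a guided hint. NEVER state the final answer.
5. End with one reflection question for the student.
"""

def build_request(problem: str, attempt: str) -> dict:
    """Package a student attempt for the model (shape is illustrative)."""
    return {
        "system_instruction": SYSTEM_INSTRUCTION,
        "contents": f"Problem:\n{problem}\n\nStudent attempt:\n{attempt}",
    }

req = build_request("Simplify (x + 2)/2.", "I cancelled the 2s and got x.")
print("NEVER state the final answer" in req["system_instruction"])  # → True
```

The key design choice is that the constraints live in the system instruction, so every turn of the conversation inherits the diagnostic-first behavior rather than relying on the user's phrasing.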

The application integrates:

  • A clean, minimal frontend for user interaction
  • Google Gemini API for reasoning and mistake diagnosis
  • Structured response formatting to ensure consistency and clarity
  • Session-based tracking to simulate recurring mistake detection
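The session-based tracking above can be sketched as a small counter over diagnosed mistake types; the class and method names here are assumptions for illustration, not the actual implementation:

```python
from collections import Counter

class Session:
    """Tracks mistake types within one session to flag recurring patterns.
    Illustrative sketch; names and threshold are assumptions."""

    def __init__(self, recurrence_threshold: int = 2):
        self.mistake_counts = Counter()
        self.recurrence_threshold = recurrence_threshold

    def record(self, mistake_type: str) -> bool:
        """Record one diagnosed mistake; return True if it is now recurring."""
        self.mistake_counts[mistake_type] += 1
        return self.mistake_counts[mistake_type] >= self.recurrence_threshold

    def recurring(self) -> list[str]:
        """Mistake types the learner has hit at least `threshold` times."""
        return [m for m, n in self.mistake_counts.items()
                if n >= self.recurrence_threshold]

s = Session()
s.record("conceptual")
s.record("logical")
print(s.record("conceptual"))  # → True: same mistake type seen twice
print(s.recurring())           # → ['conceptual']
```

A recurring flag like this is what lets the tutor shift tone from "here is your mistake" to "you keep making this kind of mistake", which is the recurring-mistake detection the demo simulates.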

Our focus was not on building many features, but on making one core learning experience deeply effective.


Challenges we ran into

One of the biggest challenges was preventing the AI from behaving like a traditional tutor or chatbot. Large language models naturally try to provide direct answers, so we had to carefully design prompt constraints to enforce diagnostic behavior.

Another challenge was balancing explanation depth—ensuring responses were insightful without overwhelming the learner. Achieving clarity, adaptability, and educational value simultaneously required multiple iterations and testing.


Accomplishments that we're proud of

  • Successfully built a non-generic educational AI that focuses on reasoning, not answers
  • Designed a clear diagnostic framework for learning mistakes
  • Created a realistic, demo-ready product rather than a conceptual prototype
  • Delivered a clean and intuitive interface that foregrounds the AI's diagnostic reasoning

What we learned

Through this project, we learned that effective educational AI is less about providing information and more about guiding thought processes. Prompt engineering, educational psychology, and user-centered design all play a crucial role in building meaningful learning tools.

We also learned how powerful AI can be when it is constrained with purpose rather than used broadly.


What's next for ShadowTutor AI

In the future, we plan to expand ShadowTutor AI by:

  • Tracking long-term mistake patterns across sessions
  • Supporting multiple domains such as competitive programming and exam preparation
  • Adding personalized learning paths based on recurring misconceptions
  • Integrating educator dashboards for classroom use

Our long-term vision is to make ShadowTutor AI a thinking companion that helps learners build strong reasoning foundations early in their education.
