## Inspiration
The idea came from a simple frustration we’ve both experienced while learning: most systems tell you what is wrong, but never explain why your understanding failed. We realized that learners often repeat the same mistakes not because they’re careless, but because their mental model itself is broken. We wanted to build something that doesn’t judge answers — but understands thinking.
## What it does
Misconception Decoder analyzes a learner’s explanation and identifies the underlying misconception pattern behind it. Instead of marking an answer incorrect, it explains where the reasoning went off track, what assumption failed, and how to correct the thinking. The focus is on diagnosing misunderstanding, not grading correctness.
## How we built it
We built a lightweight web prototype using Lovable for rapid UI development and deployment. At the core, we used the Gemini API to reason over user explanations, infer the assumed mental model, compare it with the correct conceptual structure, and generate a clear explanation. The system follows a structured reasoning flow: understand → diagnose → explain.
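The understand → diagnose → explain flow above can be sketched as two small pure functions: one that builds the diagnostic prompt sent to the model, and one that turns the reply into a structured diagnosis. This is a minimal illustration, not our production code; the field names and prompt wording are hypothetical, and the actual Gemini API call is omitted.

```typescript
// Hypothetical shape of a diagnosis returned by the model.
interface Diagnosis {
  assumedModel: string;      // the mental model the learner seems to hold
  failedAssumption: string;  // where that model breaks down
  correction: string;        // how to repair the thinking
}

// Steps 1–2 (understand, diagnose): ask the model to infer the learner's
// mental model instead of grading the answer.
function buildDiagnosisPrompt(concept: string, explanation: string): string {
  return [
    `A learner explained "${concept}" as follows:`,
    explanation,
    `Do not say whether this is right or wrong. Respond only with JSON:`,
    `{"assumedModel": "...", "failedAssumption": "...", "correction": "..."}`,
  ].join("\n");
}

// Step 3 (explain): parse the model's JSON reply into a Diagnosis.
function parseDiagnosis(reply: string): Diagnosis {
  const parsed = JSON.parse(reply);
  return {
    assumedModel: String(parsed.assumedModel ?? ""),
    failedAssumption: String(parsed.failedAssumption ?? ""),
    correction: String(parsed.correction ?? ""),
  };
}
```

In the prototype, the string from `buildDiagnosisPrompt` would be sent to Gemini and the reply fed to `parseDiagnosis`; keeping both sides as pure functions made the flow easy to test without network calls.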
## Challenges we ran into
The biggest challenge was controlling scope. It was tempting to turn this into a full learning platform, but we deliberately kept it focused on one thing: misconception detection. Another challenge was designing prompts that steer the model toward diagnosing the learner's reasoning rather than simply supplying correct answers.
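The prompt-design challenge comes down to a contrast like the one below. The exact wording here is illustrative, not the prompts we shipped: the first style invites the model to grade, while the second forces it to reconstruct the learner's reasoning before any correction appears.

```typescript
// An answer-giving prompt: the model will just grade and correct.
const answerPrompt =
  "Is this explanation of photosynthesis correct? Give the right answer.";

// A reasoning-encouraging prompt: the model must surface the mental
// model and the failed assumption before stating any correction.
const reasoningPrompt = [
  "Read the learner's explanation of photosynthesis.",
  "First restate the mental model it implies, in one sentence.",
  "Then identify the single assumption that fails.",
  "Do not state the correct answer until both steps are done.",
].join("\n");
```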
## Accomplishments that we're proud of
We’re proud that the system explains mistakes without being discouraging. It doesn’t say “you’re wrong” — it says “this is the idea you assumed, and here’s why it didn’t work.” We also successfully built a working prototype under time pressure while coordinating remotely.
## What we learned
We learned that real learning problems are often cognitive, not informational. We also learned how powerful large language models can be when used as reasoning tools, not just answer generators. Finally, we learned the importance of keeping a project simple, focused, and explainable.
## What's next for Misconception Decoder
Next, we want to expand the misconception taxonomy and support more domains like science and programming. We also plan to add longitudinal tracking so learners can see how their thinking evolves over time. Ultimately, we want Misconception Decoder to become a thinking companion, not just a learning tool.
## Built With
- css
- gemini-api
- lovable
- react
- tailwind
- typescript
- vite