💡 Inspiration
The inspiration for EduLens AI came from a place of academic burnout. As a student, I’ve been there—staring at a screen, mindlessly clicking through multiple-choice questions just to finish an assignment.
I realized that this kind of "recognition-based" testing leads to shallow learning. You might recognize the right answer, but can you actually explain it to someone else?
I wanted to build something that moves beyond the "pick B for Mitochondria" approach and enforces a true "Explain-First" mindset.
🚀 What it does
EduLens AI is an adaptive learning platform that replaces traditional quizzes with a Cognitive Audit.
Explain-First Pedagogy
Students explain concepts in their own words instead of selecting answers.

Bloom’s Taxonomy Classification
The AI evaluates the depth of understanding—from basic remembering to advanced analysis.

Spaced Repetition
I implemented the SM-2 algorithm to ensure long-term retention, not just short-term memorization.

Live Educator Dashboard
Teachers get a real-time "God-view" of the classroom, including Bloom-level badges and misconception alerts as students type.
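The live dashboard can be driven by small typed events pushed over the WebSocket bridge. A hypothetical event shape and alert formatter (field names are illustrative, not the actual wire format):

```typescript
// Hypothetical shape of a live dashboard event; field names are illustrative.
type BloomLevel =
  | "remember" | "understand" | "apply"
  | "analyze" | "evaluate" | "create";

interface DashboardEvent {
  studentId: string;
  bloomLevel: BloomLevel;
  misconception?: string; // set when the AI flags a likely misconception
}

// Render a one-line alert for the educator view.
function formatAlert(e: DashboardEvent): string {
  const badge = `[${e.bloomLevel.toUpperCase()}]`;
  return e.misconception
    ? `${badge} ${e.studentId}: possible misconception: ${e.misconception}`
    : `${badge} ${e.studentId}: on track`;
}
```

Keeping the payload this small is what makes per-keystroke updates cheap enough to stream to every connected educator.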
🛠️ How I built it
I designed the system with a high-performance, agentic architecture:
The Brain
Powered by Llama 3.1-8B via Groq, enabling sub-2-second response latency for a conversational experience.

The Engine
Built with React and TypeScript, featuring a glassmorphism UI and Framer Motion for smooth transitions.

The Logic
Implemented the SuperMemo-2 (SM-2) algorithm for mastery tracking and adaptive learning.

Real-time Sync
Used WebSockets (ws) to create a live bridge between students and educators with minimal latency.
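The SM-2 update rule behind the mastery tracking is compact. A minimal sketch of the standard algorithm, independent of my actual data model:

```typescript
// Minimal SM-2 update: quality q is 0–5 (5 = perfect recall).
interface Card {
  repetitions: number; // consecutive successful reviews
  easiness: number;    // easiness factor, clamped to >= 1.3
  interval: number;    // days until next review
}

function sm2(card: Card, q: number): Card {
  // Adjust easiness based on answer quality, never dropping below 1.3.
  const easiness = Math.max(
    1.3,
    card.easiness + 0.1 - (5 - q) * (0.08 + (5 - q) * 0.02),
  );
  if (q < 3) {
    // Failed recall: restart the repetition sequence, review tomorrow.
    return { repetitions: 0, easiness, interval: 1 };
  }
  const repetitions = card.repetitions + 1;
  const interval =
    repetitions === 1 ? 1 :
    repetitions === 2 ? 6 :
    Math.round(card.interval * easiness);
  return { repetitions, easiness, interval };
}
```

The intervals grow geometrically with each successful recall, which is what pushes material into long-term memory instead of short-term cramming.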
🚧 Challenges I ran into
The biggest challenge was balancing hallucination vs. pedagogy.
Early versions of the AI were too lenient: they would assign high scores whenever a student used the right keywords, even when the reasoning was incorrect (e.g., "Mitochondria creates oxygen").
To fix this, I built:
- A Gatekeeper Agent to detect plagiarism and shallow responses
- A Diagnostic Agent with strict weighted scoring
After multiple prompt iterations, I achieved 83.3% accuracy in misconception detection, which was a major breakthrough.
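The two-stage filter can be sketched as follows; the weights and thresholds here are illustrative placeholders, not the values I shipped:

```typescript
// Illustrative two-stage evaluation; all weights and thresholds are placeholders.
interface Evaluation {
  keywordCoverage: number; // 0–1, fraction of expected terms mentioned
  reasoningScore: number;  // 0–1, AI-judged correctness of the causal chain
  similarity: number;      // 0–1, similarity to source text (plagiarism signal)
}

// Gatekeeper: reject near-verbatim or zero-effort answers before grading.
function gatekeeper(e: Evaluation): boolean {
  return e.similarity < 0.9 && (e.keywordCoverage > 0 || e.reasoningScore > 0);
}

// Diagnostic: weight reasoning far above keyword matching so that
// "right words, wrong logic" answers score low.
function diagnosticScore(e: Evaluation): number {
  return 0.25 * e.keywordCoverage + 0.75 * e.reasoningScore;
}
```

The point of the split is that the Gatekeeper never grades; it only decides whether an answer is worth grading, which keeps the Diagnostic Agent's scoring prompt focused on reasoning quality alone.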
🏆 Accomplishments I'm proud of
Sub-2s Feedback Loop
Achieved near-instant cognitive audits for a seamless user experience.

Interactive Knowledge Graph
Built a 2D visualization that encourages students to explore dependencies and mastery paths.

Agentic Reliability
My multi-agent system successfully filters out 56% of low-quality or plagiarized responses before grading.
🧠 What I learned
I learned that prompt engineering is essentially pedagogical engineering.
When building educational systems, you can’t just ask an LLM to "grade this"—you need to define the cognitive framework you're evaluating against.
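Concretely, that means the rubric travels inside the prompt instead of living in the grader's head. A hypothetical system prompt along those lines (the exact wording I used differed):

```typescript
// Hypothetical rubric-first system prompt; illustrative, not my production prompt.
const SYSTEM_PROMPT = `
You are grading a student's explanation, not matching keywords.
Classify the answer on Bloom's taxonomy (remember, understand, apply,
analyze, evaluate, create) and check the causal chain step by step.
An answer that names correct terms but reasons incorrectly must score low.
Return JSON: { "bloomLevel": string, "score": number, "misconception": string | null }
`.trim();
```

Pinning the cognitive framework and the output schema in the prompt is what makes the model's judgments consistent enough to measure against a validation set.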
I also realized that data validation is critical. Without a proper validation pipeline, it’s impossible to understand where the model fails until it impacts real users.
🚀 What's next for EduLens AI
Multi-modal Evidence Mode
Allow students to explain concepts using diagrams or voice input.

Predictive Remediation
Build systems that anticipate misconceptions based on a student’s learning trajectory before they occur.
Built With
- drizzle-orm
- express.js
- framer-motion
- groq
- llama-3.1
- lucide-react
- neon
- node.js
- postgresql
- react-18
- shadcn-ui
- sm-2-algorithm
- tailwind-css
- typescript
- vercel
- vite
- websockets (ws)
- zod