LearnPanta — The most expensive exam is the one you take twice.
TL;DR: LearnPanta is a crash-proof exam simulator that turns every missed question into a short, personalized lesson (voice + visuals + next steps). It combines accuracy with behavioral telemetry (timing, answer changes, focus) to recommend exactly what to practice next.
Intelligence is a process; skill is the outcome.
LearnPanta doesn’t just score you – it helps you build the process that produces passing‑level performance.
For: certification candidates and cohort‑based programs that need readiness signals
Gemini 3 Integration (central to LearnPanta)
Gemini 3 is the brain, eyes, and hands of the debrief. During a timed simulation, the frontend streams structured telemetry and scratchpad artifacts. Gemini 3 (thinking_level=high) turns this into coaching decisions: focus/fatigue flags, misconception hypotheses, and a next‑best‑teaching plan. For the debrief, Gemini generates a beat‑synced tutoring script plus tool calls that highlight text, focus the camera, and draw diagrams, so explanations are visual and timed with speech.
Inspiration: the $300 FAIL screen
Retaking high-stakes exams is brutal—financially and emotionally. I started this hackathon undecided, and kept circling back to one painful truth: failure is expensive, and the prep industry is built to test people, not teach them. Most platforms give “right/wrong,” but rarely the why in a way that sticks.
So I built LearnPanta to make the debrief actually stick – so learners don’t pay for the same mistake twice.
What it does
LearnPanta is an AI-native exam simulation platform with three core pillars:
- Assessment fidelity (realistic, timed simulation)
- Behavioral telemetry (how you arrived at an answer, not just correctness)
- Adaptive remediation (precision practice instead of generic review)
Why it’s different
- Not “right/wrong.” We analyze timing patterns, answer changes, and focus events – because how you think often explains why you miss questions.
- Crash-proof by design. Every session is a Temporal workflow, so long simulations survive crashes/retries without losing evidence. Sessions are designed for long runtimes (hours).
- Fresh, grounded content. A Curator agent uses Google Search grounding to generate questions from official specs/syllabi so content doesn’t go stale like static banks.
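To keep generated questions usable downstream, the Curator's output can be held to a fixed shape before it enters the pool. A minimal sketch of what that schema and validation could look like, with field names that are illustrative assumptions rather than LearnPanta's actual schema:

```typescript
// Hypothetical shape of a question a Curator agent could emit.
// Field names are illustrative, not LearnPanta's actual schema.
interface CuratedQuestion {
  topic: string;          // weighted topic from the syllabus
  weight: number;         // relative exam weight, 0..1
  stem: string;           // the question text
  options: string[];      // answer choices
  answerIndex: number;    // index into options
  explanation: string;    // why the correct option is correct
  sourceUrl?: string;     // grounding citation from search
}

// Minimal validation before a generated question enters the pool.
function isValidQuestion(q: CuratedQuestion): boolean {
  return (
    q.options.length >= 2 &&
    Number.isInteger(q.answerIndex) &&
    q.answerIndex >= 0 &&
    q.answerIndex < q.options.length &&
    q.weight >= 0 &&
    q.weight <= 1 &&
    q.stem.trim().length > 0
  );
}
```

Validating at the boundary means a malformed generation is dropped or regenerated instead of surfacing mid-exam.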
How we built it
Durable, event-driven architecture
- Temporal orchestrates exam lifecycles (signals for telemetry, queries for state, activities for analysis).
- Metrics persist to analytics storage (TimescaleDB) while Temporal holds per-session state.
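Durable workflows require that session state be rebuilt deterministically from the event history on replay. A minimal sketch of the kind of per-session reducer that property implies, with event and field names that are assumptions for illustration (the real signal payloads may differ):

```typescript
// Illustrative telemetry events a workflow signal handler might receive.
type TelemetryEvent =
  | { kind: "answer"; questionId: string; choice: number; at: number }
  | { kind: "focus"; focused: boolean; at: number }
  | { kind: "tabSwitch"; at: number };

interface SessionState {
  answers: Record<string, number>; // latest choice per question
  answerChanges: number;           // total answer switches
  tabSwitches: number;
  lastEventAt: number;
}

const initialState: SessionState = {
  answers: {},
  answerChanges: 0,
  tabSwitches: 0,
  lastEventAt: 0,
};

// Pure, deterministic reducer: replaying the same events always rebuilds
// the same state, which is what lets a crashed session resume losslessly.
function applyEvent(s: SessionState, e: TelemetryEvent): SessionState {
  const next = { ...s, lastEventAt: e.at };
  switch (e.kind) {
    case "answer": {
      const prev = s.answers[e.questionId];
      return {
        ...next,
        answers: { ...s.answers, [e.questionId]: e.choice },
        answerChanges:
          prev !== undefined && prev !== e.choice
            ? s.answerChanges + 1
            : s.answerChanges,
      };
    }
    case "tabSwitch":
      return { ...next, tabSwitches: s.tabSwitches + 1 };
    case "focus":
      return next;
  }
}
```

Queries can then read the folded state at any point without interrupting the running simulation.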
Multi-agent Gemini system (deep integration)
- Curator (Gemini Pro + grounding): syllabus → weighted topics → MCQs + explanations
- Analyzer (Gemini Flash): behavioral patterns → focus/fatigue/anomalies
- Feedback Provider (Gemini Pro): blends correctness + behavior + scratchpad evidence into structured coaching
- Debrief Orchestrator (Gemini Flash): generates beat-synced scripts (speech + cursor/canvas actions) for guided review
High-reasoning telemetry interpretation
Gemini 3 with thinking_level="high" processes streamed JSON telemetry and maps raw interaction patterns into coaching signals (focus/fatigue/anomalies) using a custom rubric.
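The rubric's job is to turn raw interaction stats into a small set of coaching signals. A sketch of that mapping as plain code, where the thresholds and flag names are assumptions for illustration (in LearnPanta the mapping is expressed as a rubric in the model prompt, not hard-coded):

```typescript
// Illustrative per-question interaction stats derived from telemetry.
interface QuestionStats {
  dwellMs: number;        // time spent on this question
  medianDwellMs: number;  // learner's median dwell across the exam
  answerChanges: number;
  tabSwitches: number;
}

type CoachingFlag = "rushed" | "fatigue" | "distracted" | "steady";

// Assumed thresholds: far-below-median dwell reads as rushing; far-above-
// median dwell plus repeated switching reads as fatigue or uncertainty.
function coachingFlag(q: QuestionStats): CoachingFlag {
  if (q.tabSwitches > 0) return "distracted";
  if (q.dwellMs < 0.3 * q.medianDwellMs) return "rushed";
  if (q.dwellMs > 3 * q.medianDwellMs && q.answerChanges >= 2) return "fatigue";
  return "steady";
}
```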
Confidence score (flags “lucky guesses” and shaky mastery)
We compute a per-question confidence score from dwell time and answer switching:
confidence = (dwell_time / total_question_time) * switch_factor
Where switch_factor ∈ [0, 1] decreases as answer changes/tab switches increase. This helps distinguish confident mastery from unstable correctness, so the debrief targets the real misconception instead of just showing the right option.
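The formula above can be written directly in code. The exact decay curve for switch_factor is an assumption here (this sketch halves it per switch event); any monotonically decreasing function into (0, 1] fits the definition:

```typescript
// switch_factor in (0, 1]: decreases as answer changes / tab switches
// increase. Halving per event is an assumed decay, not the exact one.
function switchFactor(answerChanges: number, tabSwitches: number): number {
  return Math.pow(0.5, answerChanges + tabSwitches);
}

// confidence = (dwell_time / total_question_time) * switch_factor
function confidence(
  dwellMs: number,
  totalQuestionMs: number,
  answerChanges: number,
  tabSwitches: number,
): number {
  if (totalQuestionMs <= 0) return 0;
  return (dwellMs / totalQuestionMs) * switchFactor(answerChanges, tabSwitches);
}
```

For example, a correct answer held steadily for 80% of the question window scores 0.8, while the same dwell with one answer switch drops to 0.4: correct, but flagged as shaky.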
Multimodal tutoring UX
We treat the frontend canvas as a first-class “tool” for the model:
- live diagrams and highlights (tldraw/canvas tooling)
- synchronized narration (TTS) + beat-synced UI events
- state coordination via UI state machines (XState)
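Treating the canvas as a tool means the model's output has to arrive as timed, structured actions rather than prose. A sketch of what a beat-synced script could look like, with type and action names that are illustrative assumptions rather than LearnPanta's actual event schema:

```typescript
// Hypothetical UI actions a debrief script can fire on the canvas/cursor.
type UiAction =
  | { type: "highlight"; targetId: string }
  | { type: "draw"; shape: "arrow" | "circle"; targetId: string }
  | { type: "focusCamera"; targetId: string };

// One "beat": narration text plus the actions fired when speech reaches it.
interface Beat {
  atMs: number;        // offset into the TTS audio
  say: string;         // narration line for this beat
  actions: UiAction[]; // canvas/cursor actions synced to the speech
}

// Order beats by time so a single playback cursor can walk the script,
// regardless of the order the model emitted them in.
function scheduleBeats(beats: Beat[]): Beat[] {
  return [...beats].sort((a, b) => a.atMs - b.atMs);
}
```

A state machine (e.g. in XState) can then consume the sorted beats one at a time, pausing or replaying without desyncing narration from the canvas.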
Privacy-first telemetry
Optional signals are processed in-browser; only derived signals are sent, and raw video isn’t uploaded/stored.
Challenges
- Synchronizing voice narration with on-canvas highlights/draw steps (multimodal orchestration)
- Managing deep-reasoning latency while keeping the UI responsive (streaming + optimistic UX)
- Building for reliability (durable workflows) instead of a “best effort” hackathon demo
Accomplishments
- End-to-end product that works in the demo: landing → diagnostic → timed simulation → submit → report → guided debrief → dashboard
- Production-grade reliability: Temporal durable workflows for long sessions
- Real differentiation: accuracy + behavioral telemetry → targeted remediation (not just a question bank)
What’s next
- Expand adaptive practice and reporting for campuses / business / government programs
- More real-time coaching modes (voice interruptions, richer “next action” loops)
- A content-creation economy for professional certificate holders
Third-party libraries & acknowledgements
LearnPanta is built on top of excellent open-source and third-party tooling. Special thanks to:
- tldraw — used for the interactive diagram/canvas layer in the AI Debrief.
Note: we’re currently using tldraw under a time-limited evaluation/trial license (100 days), expected to cover the hackathon and judging period.
Built With
- cloud-sql
- docker
- firebase
- google-gemini-3
- google-genai-sdk
- graphql
- kubernetes-(gke)
- mediapipe
- pinecone
- postgresql
- temporal
- tldraw
- xstate