Inspiration

Every pilot trains in a simulator before they ever touch a real cockpit. Every surgeon practices on models before they operate on a patient. But teachers? They walk into a real classroom with real children and sink or swim.

Studies consistently show that 30–50% of new teachers leave the profession within their first five years, with classroom management and unpreparedness cited as the top drivers. Yet teacher training programs still rely on a handful of supervised practicals, a few weeks of student teaching, and a whole lot of hope.

We asked one question: what if teachers had a flight simulator? That question became TeachLab.

What it does

TeachLab drops you into a live AI classroom with five deeply realized student personas:

  • Maya — the eager overachiever who finishes your sentences and asks for harder material
  • Carlos — an ESL student who shuts down when vocabulary becomes a barrier
  • Jake — distracted and ADHD-coded, brilliant when engaged but mentally elsewhere the rest of the time
  • Priya — quiet and anxious, knows the answer but is terrified of being wrong in front of others
  • Marcus — the skeptical critical thinker who debates everything and disengages the moment he's told to "just memorize it"

You speak into a microphone. Your words are transcribed, sent to an orchestrator agent, and the right students respond, adapted to the grade level you selected, with distinct synthesized voices. A Whisper Coach delivers live teaching hints in the sidebar. Real-time Engagement and Comprehension gauges show how the room is tracking. You can inject chaos: a distracted student acts out, the class loses focus, and you have to recover.

After the session, a full Teaching Autopsy dashboard shows engagement timelines per student, comprehension and participation scores, AI coaching feedback, and a frank turn-by-turn analysis of what worked and what didn't.

How we built it

The backend runs on FastAPI with a WebSocket layer for real-time communication. Every teacher utterance flows through this pipeline:

Voice → Azure STT → Transcript → Orchestrator (GPT-4o) → Student Agents (parallel) → Azure TTS → Classroom
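The turn pipeline above can be sketched with async stubs standing in for the real services (the function names here are illustrative, not the actual TeachLab API):

```python
import asyncio

# Hypothetical stubs standing in for Azure STT, the GPT-4o orchestrator,
# the persona agents, and Azure TTS.
async def transcribe(audio: bytes) -> str:
    return "Who can tell me what photosynthesis is?"

async def orchestrate(transcript: str) -> list[str]:
    # The orchestrator decides which personas respond this turn.
    return ["maya", "priya"]

async def student_reply(persona: str, transcript: str) -> str:
    return f"{persona}: (responds to '{transcript}')"

async def synthesize(text: str) -> bytes:
    return text.encode()

async def handle_turn(audio: bytes) -> list[bytes]:
    transcript = await transcribe(audio)
    speakers = await orchestrate(transcript)
    # Student agents run in parallel, as described above.
    replies = await asyncio.gather(*(student_reply(s, transcript) for s in speakers))
    return [await synthesize(r) for r in replies]

clips = asyncio.run(handle_turn(b"..."))
```

In the real system each stage is a network call, which is why the parallelism and pipelining described below matter so much.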

Each student continuously tracks engagement and comprehension, which drift passively every turn based on emotional state, even when a student isn't speaking. Bored students disengage quietly in the background, just like real classrooms.
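A minimal sketch of that passive drift, assuming hypothetical per-emotion deltas and 0–100 gauges (the real values were hand-tuned, as noted in the Challenges section):

```python
# Hypothetical per-emotion drift deltas applied every turn,
# even to students who are not speaking.
DRIFT = {"bored": -4.0, "anxious": -1.5, "curious": +2.0, "neutral": -0.5}

class StudentState:
    def __init__(self, emotion="neutral", engagement=70.0, comprehension=60.0):
        self.emotion = emotion
        self.engagement = engagement
        self.comprehension = comprehension

    def passive_drift(self):
        delta = DRIFT.get(self.emotion, 0.0)
        self.engagement = max(0.0, min(100.0, self.engagement + delta))
        # Comprehension decays slowly once a student has tuned out.
        if self.engagement < 30:
            self.comprehension = max(0.0, self.comprehension - 1.0)

jake = StudentState(emotion="bored", engagement=40.0)
for _ in range(3):
    jake.passive_drift()
# Jake quietly slides from 40 down to 28 engagement without saying a word.
```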

Every architectural decision was made with a 30-hour constraint in mind:

  • No database — session state lives in memory. Fast, clean, no overhead.
  • Single orchestrator + 5 persona agents — a clear, debuggable hierarchy instead of a complex mesh.
  • Grade-level adaptation via prompt injection — GPT-4o handles the behavioral shift between a Grade 4 and a Grade 12 classroom without a specialized model.
  • Pipelined TTS — while student N's audio generates, student N+1's LLM call is already running, cutting perceived latency significantly.
  • Rule-based Whisper Coach — instant, deterministic, and just as effective as an LLM for priority-ordered live hints. We saved our model budget for the personas.
  • Chaos reuses the orchestrator pipeline — a disruption event is injected as teacher context. No new code path, maximum authenticity.
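The pipelined-TTS idea can be sketched with asyncio: all persona LLM calls start immediately, so synthesis for student N overlaps with the still-running calls for students N+1 onward (stub functions and timings are illustrative):

```python
import asyncio

async def llm_reply(student: str) -> str:
    await asyncio.sleep(0.05)  # stands in for the GPT-4o persona call
    return f"{student} says something"

async def tts(text: str) -> bytes:
    await asyncio.sleep(0.05)  # stands in for the Azure TTS call
    return text.encode()

async def pipelined(students: list[str]) -> list[bytes]:
    # Kick off every LLM call up front; awaiting TTS for student N
    # overlaps with the later students' replies still generating.
    reply_tasks = [asyncio.create_task(llm_reply(s)) for s in students]
    clips = []
    for task in reply_tasks:  # preserve speaking order for playback
        clips.append(await tts(await task))
    return clips

clips = asyncio.run(pipelined(["maya", "carlos", "jake"]))
```

The sequential version would cost (LLM + TTS) per student; this overlap reduces it toward one LLM call plus the sum of TTS times.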

The frontend is React + TypeScript with Zustand for global state, Framer Motion for avatar animations, and Recharts for the post-session engagement timeline.

Challenges we ran into

Making five agents feel like a classroom, not a chatbot. Early versions felt like five assistants politely taking turns. Real classrooms are messier: students disengage, react to tone, and respond at the wrong moment. We rewrote persona prompts multiple times, tuned engagement drift rates, and added consecutive-turn limits to prevent any one student from dominating.
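The consecutive-turn limit can be sketched as a simple filter in the orchestrator's speaker selection (the cap of 2 and the names are assumptions, not the tuned values):

```python
# Hypothetical cap: no student may take more than MAX_CONSECUTIVE
# turns in a row, so one persona can't dominate the room.
MAX_CONSECUTIVE = 2

def pick_speaker(candidates, history, max_consecutive=MAX_CONSECUTIVE):
    """candidates is ordered by the orchestrator's preference;
    history is the list of recent speakers, oldest first."""
    recent = history[-max_consecutive:]
    for student in candidates:
        # Skip a student who already took the last N turns.
        if len(recent) == max_consecutive and all(s == student for s in recent):
            continue
        return student
    return candidates[0]  # fall back if every candidate is capped

print(pick_speaker(["maya", "priya"], ["maya", "maya"]))  # → "priya"
```

Even though Maya is the orchestrator's first choice, she is blocked after two consecutive turns and the floor passes to Priya.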

Latency. Running an orchestrator plus up to five student agents plus TTS per turn is slow by default. Parallel agent execution and pipelined TTS helped significantly, but this remains the most obvious ceiling for production-scale use.

Emotional state coherence. A student's emotional state needs to evolve believably across a session. Too much drift and the classroom feels random; too little and it feels static. We hand-tuned drift deltas per emotional state and relied on per-response LLM state updates to keep things grounded in what the teacher actually said.

Scope discipline. We had a list of features (multiplayer co-teaching, students interrupting each other, dynamic lesson plan injection) and we cut all of them. The hardest decision of the hackathon was choosing what not to build.

Accomplishments that we're proud of

  • Five fully realized student personas that behave coherently across an entire session, each with distinct vocabulary, emotional triggers, and grade-level speech patterns.
  • A real-time classroom that genuinely feels alive: avatars that react, voices that differ, engagement bars that drift, a chaos system that disrupts mid-lesson.
  • A post-session Teaching Autopsy that reads like real coaching feedback, not a generic summary.
  • A pipelined multi-agent architecture that ships working, debuggable, and extensible in under 30 hours.
  • The demo moment: watching a first-time user speak into a mic, see five students react, hit the Chaos Button, receive a full performance debrief, and immediately understand exactly what the product is for.

What we learned

We came in thinking this was an AI project. We left knowing it was a character writing project backed by AI infrastructure. The technology worked within the first few hours. Most of the time was spent making Maya feel like Maya and Jake feel like Jake, making the classroom feel real enough that a teacher in front of it would actually practice.

We also learned that multi-agent systems are most powerful when their boundaries are clean. The orchestrator decides who speaks. The persona agents decide what they say. The feedback engine decides what it means. Each layer has one job and that made debugging tractable and the system extensible.

What's next for TeachLab

  • Curriculum integration — let teachers upload a lesson plan or learning objectives and have the AI adapt student behavior to the actual material being taught.
  • Student interruption model — students reacting to each other, not just to the teacher, for a far more authentic classroom dynamic.
  • Expanded persona library — additional archetypes covering a wider range of learning needs, cultural backgrounds, and behavioral profiles.
  • Longitudinal tracking — a teacher profile that tracks improvement across multiple sessions, highlighting growth in specific dimensions like inclusivity, pacing, and questioning technique.
  • Institutional deployment — packaged for teacher training programs, education colleges, and school district onboarding pipelines.

The mission is simple: no teacher should face a real classroom without having practiced in one first.

Built With

FastAPI, Python, WebSockets, Azure Speech (STT & TTS), GPT-4o, React, TypeScript, Zustand, Framer Motion, Recharts