Inspiration

Every member of our team has stood at the front of a room. Between us, we've run workshops, mentored at hackathons, and served as teaching assistants at the University of Debrecen. We've all felt the same moment: you finish explaining something, you look out at the room, and you genuinely don't know who understood it, who is lost, and who checked out three sentences ago.

The feedback loop in teaching is broken. You only discover confusion when someone asks — which the shy student, the overwhelmed student, and the one who doesn't want to look bad in front of peers will never do. By the time you realise you've lost half the room, it's too late to recover the lesson.

Slate was built to fix that feedback loop.


What it does

Slate is a real-time classroom simulator. A teacher speaks — and instantly, a class of AI students reacts.

Each student is a persistent persona with a distinct personality, communication style, learning background, and emotional baseline. As the teacher talks, students pay attention, get confused, zone out, raise hands, whisper to each other, or quietly fall behind — all driven by their individual traits and by what they have collectively heard in the lesson so far.

A live dashboard surfaces what the teacher cannot see in a real classroom: who is lost, who has been waiting with their hand up, whose attention is fading, whether the class discussion is dominated by two students or distributed across everyone.

Slate doesn't just simulate a class. It shows a teacher, in real time, the invisible dynamics of their own teaching.


How we built it

We designed Slate around a core insight: students share the same observable classroom — what is said out loud — but their internal states (confusion, engagement, emotion) are private to each of them.

A lightweight orchestrator agent decides each round which students to actively simulate, keeping costs low and the simulation focused. Selected students receive their full persistent persona plus a shared class log of recent exchanges, then react in parallel via GPT-4o. Students not selected that round experience passive attention decay — no API call, no cost, still realistic.
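The round loop described above can be sketched roughly as follows. This is an illustrative sketch, not Slate's actual code: the type fields, the `ACTIVE_PER_ROUND` cap, the decay constant, and the ranking heuristic are all assumptions, and the GPT-4o call is replaced with a stub.

```typescript
// Hypothetical sketch of one orchestrator round. Field names, constants,
// and the ranking heuristic are illustrative, not Slate's implementation.
type Student = {
  id: string;
  attention: number;   // 0..1, decays when not actively simulated
  confusion: number;   // 0..1
  handRaised: boolean;
};

const ACTIVE_PER_ROUND = 3;   // assumed cap on per-round model calls
const PASSIVE_DECAY = 0.05;   // assumed attention loss for idle students

// Stand-in for the GPT-4o call; the real system would send the persona
// plus the shared class log and parse a structured reaction back.
async function simulateStudent(s: Student, classLog: string[]): Promise<Student> {
  return { ...s, attention: Math.min(1, s.attention + 0.1) };
}

async function runRound(students: Student[], classLog: string[]): Promise<Student[]> {
  // Orchestrator heuristic (assumed): prioritise raised hands and confusion.
  const ranked = [...students].sort(
    (a, b) => (Number(b.handRaised) + b.confusion) - (Number(a.handRaised) + a.confusion),
  );
  const active = new Set(ranked.slice(0, ACTIVE_PER_ROUND).map((s) => s.id));

  return Promise.all(
    students.map((s) =>
      active.has(s.id)
        ? simulateStudent(s, classLog) // one model call each, run in parallel
        : Promise.resolve({ ...s, attention: Math.max(0, s.attention - PASSIVE_DECAY) }), // passive decay, no API call
    ),
  );
}
```

The key cost property is in the last branch: unselected students are updated with pure arithmetic, so per-round spend scales with `ACTIVE_PER_ROUND`, not with class size.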

Everything feeds into live KPIs computed client-side: engagement, confusion index, inclusion score, talk ratio, Bloom's taxonomy level, cold-call risk, and ignored hands. The system is built on Next.js, Azure Static Web Apps, Azure OpenAI, Azure Speech-to-Text, and Supabase — with state persisted in the browser for zero-friction demos.
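Since the KPIs are computed client-side from the round snapshot, they reduce to pure functions over student state. A minimal sketch of three of them, with formulas that are our assumption here rather than Slate's exact definitions:

```typescript
// Illustrative client-side KPI computation. Metric names match the
// dashboard; the exact formulas and thresholds are assumptions.
type StudentKpiState = {
  id: string;
  confusion: number;        // 0..1
  spokeCount: number;       // turns taken this session
  handRaisedRounds: number; // consecutive rounds with hand up
};

function computeKpis(states: StudentKpiState[]) {
  const totalTurns = states.reduce((n, s) => n + s.spokeCount, 0);
  const topTwo = [...states].sort((a, b) => b.spokeCount - a.spokeCount).slice(0, 2);
  const topTurns = topTwo.reduce((n, s) => n + s.spokeCount, 0);
  return {
    // Mean confusion across the class, 0..1.
    confusionIndex: states.reduce((n, s) => n + s.confusion, 0) / states.length,
    // Share of all turns taken by the two most active students.
    talkRatio: totalTurns ? topTurns / totalTurns : 0,
    // Hands that have waited at least three rounds (assumed threshold).
    ignoredHands: states.filter((s) => s.handRaisedRounds >= 3).map((s) => s.id),
  };
}
```

A `talkRatio` near 1.0 is exactly the "80% of participation came from two people" pattern the dashboard is meant to expose.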


Challenges we ran into

Context vs. cost. Students needed memory of the lesson — but sending full session history to every agent every round would make the system slow and expensive. We solved it with a rolling class log of recent exchanges: students know what was said, not what their peers privately felt.
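The rolling log itself is a small mechanism. A minimal sketch, assuming a fixed window size (the real window size and exchange shape are not specified here):

```typescript
// Minimal sketch of the rolling class log: agents see recent public
// utterances only, never peers' private states. Window size is assumed.
const LOG_WINDOW = 12;

type Exchange = { speaker: string; text: string };

function appendToLog(log: Exchange[], e: Exchange): Exchange[] {
  const next = [...log, e];
  // Keep only the most recent LOG_WINDOW exchanges, so prompt size
  // stays bounded no matter how long the session runs.
  return next.length > LOG_WINDOW ? next.slice(next.length - LOG_WINDOW) : next;
}
```

Because only this bounded log is sent to each agent, prompt cost per student is constant over the session rather than growing with its length.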

Atomic state. Running multiple student agents concurrently risks partial updates and race conditions. We enforced a single server-side round that returns one complete, consistent classroom state — no partial merges on the client.
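The client-side half of that contract can be sketched in a few lines. This is an assumed shape, not Slate's actual types: the point is that the client only ever swaps in a complete, newer snapshot, never merges per-student fragments.

```typescript
// Sketch of the atomic-state contract (types are illustrative): the
// server computes a full round and returns one complete snapshot.
type ClassroomState = {
  round: number;
  students: { id: string; attention: number }[];
};

// Client side: never merge partial results. Accept a snapshot only if it
// is newer than the one currently held; otherwise keep the current state.
function applySnapshot(current: ClassroomState, incoming: ClassroomState): ClassroomState {
  return incoming.round > current.round ? incoming : current;
}
```

Rejecting stale snapshots by round number also makes out-of-order responses harmless: a slow request that arrives late simply loses.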

Realism without over-engineering. The temptation in simulation is to add more variables. We resisted. Persona is immutable; only state changes. Class mood is a derived summary, not a raw data dump. Every design decision was filtered through one question: does this make teaching feedback more useful, or just more complex?


Accomplishments that we're proud of

We built a simulation that reflects something we've all experienced personally — and that was the hardest part. It's easy to build a system that looks like a classroom. It's much harder to build one that feels like one.

We're proud that Slate surfaces the students who never raise their hand. The ones who get frustrated waiting. The ones who understood the first explanation but lost the thread three sentences later. Those are the students our team remembers from our own teaching — and they're the ones most existing edtech tools ignore.

We're also proud of the architecture: a system deployable today, in a real teacher training context, without any classroom hardware or institutional integration required.


What we learned

Teaching is a high-bandwidth job with almost no real-time feedback channel. Teachers are expected to lecture, manage behaviour, check comprehension, and maintain pace — simultaneously — with no instrument to measure what's actually landing.

We learned that the most valuable output of Slate isn't the student reactions. It's the moments it surfaces that a teacher would never see: the student who has had their hand up for three rounds, the cluster of confused faces after a complex explanation, the lesson where 80% of participation came from two people.

We also learned that simulation fidelity comes from constraints, not complexity. The class log, not raw state sharing. Passive decay, not constant agent calls. Immutable personas, not state that drifts unpredictably.


What's next for Slate

The immediate next step is putting Slate in front of real teachers — starting with our own network at the University of Debrecen and in the workshops and hackathons we run.

Beyond the demo, we want to build structured intervention suggestions: when Slate detects a confusion spike, it tells the teacher what to try next, not just that something is wrong. When inclusion is low, it surfaces which students haven't spoken and suggests a natural prompt to bring them in.

Longer term, Slate becomes a training environment for new teachers — a flight simulator for the classroom. Before you stand in front of thirty students, you practice on thirty agents. You learn your own patterns. You find out what kind of explanations lose people, and what brings them back.

Every teacher on our team would have wanted this before their first class. That's the simplest measure of whether Slate is worth building.

Built With

  • ai
  • azure
  • gpt
  • llms
  • nextjs
  • speech