Inspiration
What if a board could listen and take notes by itself?
Teaching happens in motion, but classroom tools don’t move with it. Explanations evolve sentence by sentence, yet visual aids must either be prepared in advance or written manually, forcing teachers to pause their flow.
No amount of preparation can fully predict how a concept will unfold as it is explained. The idea behind Chalkless is a different kind of classroom interface — one where spoken reasoning itself becomes the source of structure.
A board that listens, understands progression, and draws alongside the teacher without interruption was not realistically possible until now. With Gemini 3’s real-time multimodal reasoning and long-context understanding, the teacher’s voice can finally become the visual medium.
What it does
Chalkless is a real-time classroom board that listens while teaching happens. As a teacher explains a concept out loud, Chalkless follows the flow of the explanation and builds structured visuals alongside it — without interrupting the lecture.
The generated board content reflects only what is spoken, appearing automatically as understanding forms. At the end of a session, this content can be exported as a PDF Study Guide, turning live teaching into persistent material.
Instead of preparing everything in advance or stopping mid-explanation to write, teachers teach naturally — and the board keeps up.
How I built it
Chalkless is built as a stateful, real-time system, not a prompt-based application.
The Tech Stack:
- Frontend: React + Vite (for speed and reactivity)
- Visualization: React Flow + Dagre (for automatic graph layout)
- AI Engine: Google Gemini 3 (Experimental) via the Google AI SDK
- Audio: Web Speech API (for low-latency local transcription)
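As a rough illustration of the audio side, local transcription sits on the browser's SpeechRecognition interface. The sketch below is a simplified version of that capture plus filler cleanup; the handler name and filler list are illustrative, not the actual Chalkless code.

```ts
// Minimal sketch of continuous local transcription with the Web Speech API.
// FILLERS and onSegment are illustrative names, not the real Chalkless code.
const FILLERS = /\b(um+|uh+|you know)\b/gi;

type SegmentHandler = (text: string) => void;

export function startListening(onSegment: SegmentHandler) {
  // webkitSpeechRecognition covers Chrome; SpeechRecognition is the standard name.
  const Recognition =
    (window as any).SpeechRecognition || (window as any).webkitSpeechRecognition;
  const recognition = new Recognition();
  recognition.continuous = true;      // keep listening across pauses
  recognition.interimResults = false; // only emit finalized phrases

  recognition.onresult = (event: any) => {
    for (let i = event.resultIndex; i < event.results.length; i++) {
      if (!event.results[i].isFinal) continue;
      // Light local cleanup before anything is sent to the model.
      const cleaned = event.results[i][0].transcript.replace(FILLERS, "").trim();
      if (cleaned) onSegment(cleaned);
    }
  };

  recognition.start();
  return () => recognition.stop();
}
```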
The Architecture: Live audio is captured through the browser and cleaned locally to remove filler noise. These transcript segments are streamed to Gemini 3, which is prompted to act as an "Academic Scribe." Instead of responding to isolated inputs, the model reasons over the lecture history to track conceptual boundaries.
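In spirit, the scribe loop looks something like the sketch below, assuming the @google/generative-ai SDK surface. The model id, prompt wording, and BoardUpdate shape are placeholders; the real prompt is far stricter (see Challenges).

```ts
import { GoogleGenerativeAI } from "@google/generative-ai";

// Sketch only: model id, prompt wording, and the BoardUpdate shape are
// placeholders, not the exact Chalkless implementation.
type BoardUpdate = { action: "wait" | "update"; nodes?: unknown[]; edges?: unknown[] };

const genAI = new GoogleGenerativeAI(import.meta.env.VITE_GEMINI_API_KEY);
const model = genAI.getGenerativeModel({
  model: "gemini-3-experimental", // placeholder model id
  systemInstruction:
    "You are an academic scribe. Only record concepts the speaker has said aloud. " +
    "Respond with JSON only.",
});

// A single chat session carries the lecture history, so every new segment is
// interpreted in the context of everything said so far.
const chat = model.startChat();

export async function handleSegment(
  segment: string,
  applyStateUpdate: (update: BoardUpdate) => void // stand-in for the real board updater
) {
  const result = await chat.sendMessage(
    `New transcript segment:\n"${segment}"\n` +
      `If this starts or extends a concept, return a board update; otherwise return {"action":"wait"}.`
  );
  const update: BoardUpdate = JSON.parse(result.response.text());
  if (update.action !== "wait") applyStateUpdate(update);
}
```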
When a new concept is detected, the system triggers a State Update that dynamically re-renders the React Flow graph using the Dagre layout engine to ensure the visuals remain organized, regardless of how complex the lecture gets.
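The layout step follows the usual React Flow + Dagre pattern: each state update re-runs Dagre over the current concept graph and writes the computed positions back onto the nodes. The node dimensions below are assumed defaults, and the import path depends on the React Flow version (here, the `reactflow` v11 package).

```ts
import dagre from "dagre";
import type { Node, Edge } from "reactflow";

const NODE_WIDTH = 200;  // assumed default dimensions for layout purposes
const NODE_HEIGHT = 70;

export function layoutGraph(nodes: Node[], edges: Edge[]): Node[] {
  const g = new dagre.graphlib.Graph();
  g.setDefaultEdgeLabel(() => ({}));
  g.setGraph({ rankdir: "TB", nodesep: 40, ranksep: 60 }); // top-to-bottom flow

  nodes.forEach((n) => g.setNode(n.id, { width: NODE_WIDTH, height: NODE_HEIGHT }));
  edges.forEach((e) => g.setEdge(e.source, e.target));

  dagre.layout(g);

  // Dagre returns node centers; React Flow expects the top-left corner.
  return nodes.map((n) => {
    const { x, y } = g.node(n.id);
    return { ...n, position: { x: x - NODE_WIDTH / 2, y: y - NODE_HEIGHT / 2 } };
  });
}
```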
Challenges I ran into
One major challenge was preventing hallucinations. The board must never introduce concepts that were not spoken by the teacher. This required strict system prompting to constrain the model to a "Scribe" persona rather than an "Author" persona.
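For illustration only, the constraint boils down to a system instruction along these lines; the production prompt is longer and more specific.

```ts
// Illustrative only: the production prompt is longer and stricter.
export const SCRIBE_SYSTEM_INSTRUCTION = `
You are an academic scribe, not an author.
- Record only concepts, terms, and relationships the speaker has explicitly said.
- Never add examples, definitions, or connections of your own.
- If a segment contains nothing board-worthy, respond with {"action": "wait"}.
- Prefer the speaker's own wording over paraphrase.
`;
```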
Another challenge was timing. Acting too early interrupts teaching flow, while acting too late reduces usefulness. Designing a system that knows when not to act was just as important as knowing when to respond.
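One way to express "knowing when not to act" is a small gate in front of the model call: segments are buffered until the speaker pauses or enough material has accumulated. The thresholds below are illustrative values, not Chalkless's actual tuning.

```ts
// Illustrative gating sketch: buffer segments and only call the model
// after a natural pause or once enough new material has accumulated.
const PAUSE_MS = 2500;  // assumed pause threshold
const MIN_CHARS = 120;  // assumed minimum buffered text

let buffer = "";
let pauseTimer: ReturnType<typeof setTimeout> | undefined;

export function gateSegment(segment: string, flush: (text: string) => void) {
  buffer += (buffer ? " " : "") + segment;
  if (pauseTimer) clearTimeout(pauseTimer);

  if (buffer.length >= MIN_CHARS) {
    flush(buffer); // enough content: act now
    buffer = "";
    return;
  }
  // Otherwise wait for a pause in speech before acting.
  pauseTimer = setTimeout(() => {
    if (buffer) flush(buffer);
    buffer = "";
  }, PAUSE_MS);
}
```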
Finally, operating in real time introduced latency and rate-limit constraints, requiring thoughtful orchestration to keep the system responsive and stable.
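A minimal version of that orchestration is a serial queue with a minimum spacing between model calls, so bursts of speech don't trip rate limits or pile up stale requests. The interval here is an assumed value, and the queue is a simplified stand-in for the real scheduling logic.

```ts
// Simplified orchestration sketch: one in-flight request at a time,
// with a minimum interval between calls to stay under rate limits.
const MIN_INTERVAL_MS = 1500; // assumed spacing between Gemini calls

let lastCall = 0;
let inFlight: Promise<void> = Promise.resolve();

export function enqueueModelCall(call: () => Promise<void>) {
  inFlight = inFlight.then(async () => {
    const wait = Math.max(0, lastCall + MIN_INTERVAL_MS - Date.now());
    if (wait > 0) await new Promise((r) => setTimeout(r, wait));
    lastCall = Date.now();
    try {
      await call();
    } catch (err) {
      console.warn("Model call failed, dropping this update", err);
    }
  });
  return inFlight;
}
```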
Accomplishments that I am proud of
- Demonstrating AI that acts during human thinking, not after
- Treating Gemini 3 as a reasoning agent, not a chatbot
- Generating structured visuals without changing teaching behavior
- Exploring a new human–AI collaboration model in education
- Maintaining trust by reflecting only spoken content
What I learned
This project reinforced that impactful AI systems are not built through better prompts alone. They require clear boundaries, state management, and intentional orchestration.
I also learned that multimodal AI becomes truly powerful when it listens, reasons, and acts over time, rather than responding to single inputs.
What's next for Chalkless: The AI Scribe
Future versions of Chalkless could support richer visual representations, adapt to different teaching styles, and improve understanding of classroom context.
Beyond classrooms, the same real-time listening and visualization approach could extend to meetings, workshops, and collaborative discussions: environments where spoken ideas often disappear without structure. In these settings, Chalkless could help teams capture reasoning as it happens, creating shared visual context without interrupting conversation.
Built With
- css
- dagre
- google-ai-sdk
- google-ai-studio
- google-gemini-3
- google-web-speech-api
- html
- react
- react-flow
- typescript