The Problem
AI coding agents have a cognitive offloading problem. Developers ask the AI to build something, the AI builds it, and the developer ships code they don't understand. This creates invisible technical debt -- not in the codebase, but in the developer's brain. When something breaks at 2 AM, the developer who "built" the feature can't debug it because they never actually understood it.
Our Solution
We built Learn Mode -- a new native agent for Kilo Code (https://github.com/Kilo-Org/kilo), an open-source AI coding CLI that supports 500+ models. When activated, Learn Mode:
- Implements code fully -- no degraded coding experience
- Pauses after implementation to ask 2-3 comprehension questions that reference specific functions, variables, and file paths from the code just written
- Tracks understanding using a persistent learning tracker with four question categories (Comprehension, Reasoning, System Thinking, Edge Cases)
- Auto-calibrates difficulty using a sliding window algorithm over the last 5 answers -- beginners get "what does this do?" questions, advanced users get "what could go wrong?" challenges
- Persists progress across sessions via a project-scoped aggregate so your learning history survives session boundaries
- Shows real-time status in the TUI prompt bar via Server-Sent Events
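The sliding-window calibration above can be sketched in a few lines. This is an illustrative reconstruction, not the shipped code: the `CheckQuality` and `Level` names and the exact thresholds are assumptions; only the window size of 5 and the beginner/advanced behavior come from the description.

```typescript
type CheckQuality = "correct" | "partial" | "wrong";
type Level = "beginner" | "intermediate" | "advanced";

const WINDOW_SIZE = 5;

// Sliding-window calibration: only the last 5 recorded answers matter,
// so a rough start doesn't permanently pin a user at "beginner".
function calibrate(history: CheckQuality[]): Level {
  const window = history.slice(-WINDOW_SIZE);
  if (window.length === 0) return "beginner"; // no data yet: start easy
  const wrong = window.filter((q) => q === "wrong").length;
  const correct = window.filter((q) => q === "correct").length;
  if (wrong >= 2) return "beginner"; // struggling: ask "what does this do?"
  if (correct >= 4) return "advanced"; // cruising: ask "what could go wrong?"
  return "intermediate";
}
```

Keeping the window short is the point: difficulty reacts within a handful of questions instead of averaging over an entire session.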
What Makes This Different
- Not a quiz app. It's embedded directly in the coding workflow. You code normally; the mentoring happens inline.
- Not generic questions. Every question references real identifiers from the code that was just written: "What does calibrate() return when there are 2 wrong answers in the sliding window?"
- Respects developer flow. Say "skip" to skip any question, "flow mode" to disable questions for the current task, or "done" to get a learning summary.
- Data-driven, not vibes. A Zod-validated tracker records every check with quality, category, and concepts. Calibration is algorithmic, not LLM guessing.
Technical Depth
This isn't a prompt wrapper. We built across every layer of Kilo's architecture:
- Agent system — Native learn agent registered in agent.ts with custom permissions
- Tool system — learnwrite and learnread tools with permission gating (denied globally, allowed only for Learn agent)
- Storage — Filesystem-backed tracker at ~/.local/share/kilo/storage/learn/{sessionID}.json
- Bus events — learn.updated event published on every record, enabling real-time propagation
- Server — GET /session/:sessionID/learn and GET /session/learn-aggregate HTTP endpoints with OpenAPI docs
- SDK — Auto-generated TypeScript SDK with LearnState, LearnAggregate types and typed client methods
- TUI — Real-time prompt bar indicator showing level + check count, synced via SSE
- Persistence — Project-scoped cross-session aggregate keyed by git root commit hash
- Tests — 34 tests covering tracker logic, tool behavior, calibration edge cases, aggregate persistence
Progress Summary
- Started from zero -- no prior Learn Mode code existed in Kilo
- Built a complete, tested, production-quality feature touching 15+ files across 5 packages
- Zero test regressions on the existing 1167-test suite
- Clean typecheck (tsgo --noEmit) with no errors
- Follows Kilo's fork merge process with kilocode_change markers for upstream compatibility
The Bigger Vision
AI should make developers think harder, not less. If this approach proves effective, it could be extended to:
- Team learning dashboards -- aggregate understanding across a team to find knowledge gaps
- Onboarding acceleration -- new developers learn the codebase by building with the AI mentor
- Code review enhancement -- the learning log becomes evidence of understanding for PR reviews
Built With
- TypeScript
- Bun
- SolidJS
- Hono (HTTP server)
- Zod (schema validation)
- OpenTUI (terminal UI framework)
- Server-Sent Events (real-time sync)
- OpenAPI / hono-openapi (API spec generation)
- Kilo Code CLI (open source AI coding agent)