Inspiration
Duolingo proved that gamified, bite-sized lessons drive real retention, but it only works for languages. We kept wishing we could get that same addictive learn-and-play loop for everything else: HTTP fundamentals, financial modeling, music theory, anything. Meanwhile, AI can generate great educational content, but it always comes back as a wall of text. We wanted to close that gap: take Duolingo's proven game loop and make it work for any topic by replacing the hand-authored content pipeline with AI agents.
What it does
You type in any topic and LearnTS generates a full structured course in seconds, complete with a skill path, bite-sized lessons, and varied exercises (flashcards, multiple choice, fill-in-the-blank). Difficulty and exercise type adapt in real time based on your performance. Miss three answers in a lesson and you run out of hearts and have to retry, just like Duolingo. Every answer comes with an explanation, so you're actually learning, not just guessing. XP, streaks, and "Perfect Lesson!" celebrations keep you coming back for one more round.
How we built it
The architecture is three AI agents plus a dumb renderer. Agent 1 (Course Architect) decomposes any topic into a structured curriculum with modules and lessons. Agent 2 (Card Generator) produces typed exercise JSON on demand based on the current difficulty and a type hint. Agent 3 (Adaptive Tutor) scores answers, explains them, and emits two signals, the next difficulty and the next exercise type, which feed back into Agent 2. The UI just switches on the JSON `type` field to render the right component; it has zero subject-matter knowledge. We built it as a single-file React app using the Gemini API for all three agents, with all state in memory.
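The renderer's subject-agnosticism comes from treating exercises as a discriminated union and dispatching purely on the `type` field. A minimal sketch of that idea in TypeScript (the shape and field names here are illustrative assumptions, not LearnTS's actual schema):

```typescript
// Hypothetical exercise shapes; the real LearnTS JSON fields may differ.
type Exercise =
  | { type: "flashcard"; front: string; back: string }
  | { type: "multiple_choice"; prompt: string; options: string[]; answerIndex: number }
  | { type: "fill_blank"; sentence: string; answer: string };

// The renderer only switches on `type` — it knows nothing about the subject.
// TypeScript's exhaustiveness checking ensures every variant is handled.
function componentFor(exercise: Exercise): string {
  switch (exercise.type) {
    case "flashcard":
      return "FlashcardCard";
    case "multiple_choice":
      return "MultipleChoiceCard";
    case "fill_blank":
      return "FillBlankCard";
  }
}
```

Because the switch is exhaustive over the union, adding a new exercise type later is a compile-time checklist: extend the union, and the compiler flags every renderer that hasn't handled it yet.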
Challenges we ran into
Getting the agents to reliably return valid, structured JSON was the biggest headache — one malformed response crashes the whole lesson flow. We added schema validation and a fallback-to-flashcard safety net. Latency was another fight; three sequential API calls per exercise adds up fast, so we started pre-generating the next exercise while the user reads the explanation. Tuning the adaptive loop was also tricky: early versions either spiraled to maximum difficulty after two correct answers or got stuck on easy mode.
Accomplishments that we're proud of
The system is genuinely content-agnostic. We tested it on topics from organic chemistry to hip-hop history and the lesson quality held up without changing a single line of code. The dual-axis adaptation — adjusting both difficulty and exercise type — makes sessions feel noticeably more natural than just "same format but harder." And the lesson structure with hearts and completion screens actually makes it feel like a game, not a quiz generator.
What we learned
Structured output from LLMs is powerful but fragile — you need to treat every agent response as untrusted input. We also learned that the "game feel" details matter enormously: adding a progress bar, a completion screen, and hearts transformed user perception from "this is a chatbot" to "this is a product." Finally, prompting three specialized agents is dramatically better than one monolithic prompt trying to do everything.
What's next for LearnTS
Matching and ordering exercise types are already specced and ready to build — they'll break up multiple-choice fatigue. Beyond that: user accounts with spaced repetition scheduling so LearnTS remembers what you got wrong and brings it back days later, branching skill trees where Agent 1 can define parallel learning tracks, a code challenge exercise type with sandboxed evaluation, and shared course links with leaderboards. The architecture is designed so every one of these is additive — no rewrites required.
Built With
- aisdk
- drizzle
- nextjs
- opencode
- postgresql
- typescript
- zod
