Our Tesseract Story
What Inspired Us
Learning a new skill—whether React, Python, or machine learning—often feels overwhelming. We noticed people usually know what they want to learn but get stuck on how: where to start, what order to follow, and how to know when they’re ready for the next step. Most learning platforms offer fixed curricula that ignore prior knowledge, goals, and learning style.
We built Tesseract to fix that. The idea was an AI that works like a personal learning companion: it creates structured roadmaps from your goals, adapts to your background and preferences, and guides you through modules with the right resources. Instead of generic courses, we wanted adaptive paths shaped by who you are and what you already know.
How We Built It
Tesseract is a full-stack web application. The frontend uses Next.js 14 (App Router), React, TypeScript, and Tailwind CSS, with Framer Motion for transitions between the module grid and the detailed module view. State is managed with Zustand. The layout includes a header, collapsible sidebar, main content that switches between module grid and single-module view, and a slide-out chat panel.
On the backend, we use Supabase for auth and storage. The schema covers user profiles, skills (with learning preferences), nodes (modules with labels, descriptions, tiers, prerequisites, status, resource counts), node resources (videos, articles, docs), notes, chat messages, and learning events. Row Level Security keeps each user’s data isolated.
The AI layer uses Anthropic Claude. When a user adds a skill, we call Claude with a system prompt that defines a JSON roadmap: 6–12 modules with id, order, label, description, estimated time, key topics, and resource counts. We parse the response and store it as skills and nodes in Supabase. Claude also generates module explanations, powers context-aware chat, and evaluates suggested resources. For visualization we use ReactFlow and Dagre.
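The roadmap contract can be sketched as a TypeScript type plus a validator run before anything is stored in Supabase. The field names below are illustrative assumptions, not Tesseract's exact schema:

```typescript
// Illustrative shape of one roadmap module returned by Claude.
// Field names here are assumptions, not the exact schema Tesseract uses.
interface RoadmapModule {
  id: string;
  order: number;
  label: string;
  description: string;
  estimatedTime: string; // e.g. "3 hours"
  keyTopics: string[];
  resourceCounts: { videos: number; articles: number; docs: number };
}

// Validate the parsed response before storing it as skills and nodes.
function validateRoadmap(data: unknown): RoadmapModule[] {
  if (!Array.isArray(data)) throw new Error("roadmap must be an array");
  if (data.length < 6 || data.length > 12) {
    throw new Error(`expected 6-12 modules, got ${data.length}`);
  }
  for (const m of data) {
    for (const field of ["id", "order", "label", "description"]) {
      if (!(field in m)) throw new Error(`module missing "${field}"`);
    }
  }
  return data as RoadmapModule[];
}
```

Validating up front means a malformed response fails loudly at the API boundary instead of producing half-saved roadmaps.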
Flow: user signs up → completes profile → adds a skill via intake modal → AI generates roadmap → user sees modules in a grid → clicks a module for details, AI explanation, resources, notes → marks concepts as known/in progress/complete → prerequisites unlock later modules → can open chat at any time.
Challenges We Faced
Getting reliable structured output from Claude was difficult. We needed valid JSON roadmaps, but responses were sometimes wrapped in markdown fences or included extra text. We solved this with strict system prompts, stripping code fences before parsing, and clear error handling. We also added graceful handling for edge cases like API credit exhaustion.
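The cleanup step can be sketched roughly as below, assuming the raw reply may arrive fenced in markdown or surrounded by extra prose (this is a minimal illustration, not our production parser):

```typescript
// Minimal sketch: strip markdown fences and extra text, then parse JSON.
function extractJson(raw: string): unknown {
  // Remove markdown code fences such as ```json ... ```
  const text = raw
    .trim()
    .replace(/^```(?:json)?\s*/i, "")
    .replace(/```\s*$/, "");
  // Fall back to the first {...} or [...] span if prose remains around it.
  const start = text.search(/[[{]/);
  const end = Math.max(text.lastIndexOf("}"), text.lastIndexOf("]"));
  if (start === -1 || end === -1) throw new Error("no JSON found in response");
  return JSON.parse(text.slice(start, end + 1));
}
```

The thrown error is what lets the UI surface a clear "generation failed, try again" state instead of a silent blank roadmap.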
Context-aware chat required careful design. The tutor needed the current skill, completed modules, available concepts, and the module the user was viewing. We inject roadmap context into every request so answers stay relevant and avoid suggesting topics the user has already completed.
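The context block prepended to each chat request can be sketched like this; the field names and wording are hypothetical, not Tesseract's exact prompt:

```typescript
// Hypothetical shape of the roadmap context injected into every chat request.
interface ChatContext {
  skillName: string;
  completedModules: string[];
  availableModules: string[];
  currentModule?: string;
}

// Build a system-prompt fragment so tutor answers stay grounded in progress.
function buildSystemContext(ctx: ChatContext): string {
  const lines = [
    `The user is learning: ${ctx.skillName}.`,
    `Completed modules: ${ctx.completedModules.join(", ") || "none"}.`,
    `Modules available next: ${ctx.availableModules.join(", ") || "none"}.`,
  ];
  if (ctx.currentModule) {
    lines.push(`They are currently viewing: ${ctx.currentModule}.`);
  }
  lines.push("Do not suggest topics the user has already completed.");
  return lines.join("\n");
}
```

Rebuilding this string per request keeps the tutor stateless on the server while still answering relative to the user's position in the roadmap.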
Supabase Row Level Security took iteration. We modeled policies through the chain (nodes → skills → users) and tested insert, update, and delete paths to avoid cross-user access.
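A policy along the ownership chain looks roughly like the SQL below; the table and column names (`nodes.skill_id`, `skills.user_id`) are assumptions, and the actual Tesseract schema may differ:

```sql
-- Illustrative RLS sketch: a node is visible only if its parent skill
-- belongs to the authenticated user. Names are assumptions.
alter table nodes enable row level security;

create policy "nodes are visible to their owner"
  on nodes for select
  using (
    exists (
      select 1 from skills
      where skills.id = nodes.skill_id
        and skills.user_id = auth.uid()
    )
  );
```

Insert, update, and delete need their own policies along the same chain, which is exactly where the iteration and testing came in.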
Recomputing module status from prerequisites was subtle. When a user marks a concept complete, we recalculate which modules unlock and which remain locked. We built utilities that walk the prerequisite graph, check completion, and update each module's status, keeping the result in sync with the UI.
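The recompute step can be sketched as a single pass over the nodes, assuming each node lists the ids of its prerequisites (names here are illustrative):

```typescript
// Minimal sketch of status recomputation after a completion event.
type Status = "locked" | "available" | "in_progress" | "complete";

interface RoadmapNode {
  id: string;
  prerequisites: string[]; // ids of modules that must be complete first
  status: Status;
}

// Unlock any locked node whose prerequisites are all complete.
function recomputeStatuses(nodes: RoadmapNode[]): RoadmapNode[] {
  const complete = new Set(
    nodes.filter((n) => n.status === "complete").map((n) => n.id)
  );
  return nodes.map((n) => {
    if (n.status !== "locked") return n; // never regress started/finished work
    const unlocked = n.prerequisites.every((p) => complete.has(p));
    return unlocked ? { ...n, status: "available" } : n;
  });
}
```

Returning a fresh array rather than mutating in place makes it easy to feed the result straight into a store update and re-render the grid.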
What We Learned
We learned how to design system prompts for structured AI outputs and how to validate and clean responses on the server.
We gained experience with Supabase RLS and multi-tenant access, including auth flow and client vs. server instances.
We worked through the interplay of streaming chat and UI state, and how to structure Zustand for skills, nodes, profile, and panel visibility.
We also saw that personalization matters: roadmaps that adapt to experience, prior knowledge, time per week, and learning style feel much more useful than generic curricula.
Built With
- claude
- cursor
- fluid-ui
- google-cloud
- python
- supabase
- tavily