Inspiration

In college, study groups often form by proximity, not by need. Nobody is matching you on what you actually need to learn this week versus what you can teach.

And AI tutors are weirdly isolating, because you learn by having to explain things. There's a well-replicated effect in education research called the protégé effect: people who learn a concept in order to teach it score ~25% higher on retention tests than people who learn it just for themselves. Pair tutoring works, but matching is the bottleneck.

Meanwhile the professor has no idea who's struggling with what until the midterm scores come in and it's too late. We wanted to build something that sits in the middle of all three of these gaps (professor, student, and peer) and uses AI to coordinate them instead of replacing any one of them.

What It Does

Stitch is a classroom-scale platform with three sides:

The professor side. Upload a syllabus PDF and Stitch extracts the week-by-week concept map. Upload each week's lecture (PPTX, PDF, DOCX) and Stitch extracts 3–5 subconcepts per lecture plus grounding materials. Professors can also generate a full weekly quiz across a concept's subconcepts with one click.

The student side. Every student has a "mastery ribbon": a two-tier heatmap of concepts × subconcepts, color-coded green/amber/red. Click any weak cell and Stitch runs the matching algorithm: it scores every classmate on (a) complementary strength on your weak cell, (b) reciprocity (is there something you can teach them back?), and (c) availability overlap. Then it surfaces your top 3 matches with a one-line "you'd help each other with X / they'd help you with Y" summary.

Stitch Spaces. A two-student live tutoring room where an LLM orchestrator builds a custom session plan for the pair:

If only one of you is weak on a subconcept → peer_teach: the strong student teaches, the weak one takes a quiz. If both are weak → llm_teach: Stitch writes a primer, and both take the quiz. Stitches are interleaved so neither student is stuck teaching for 30 minutes straight.
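The decision rule above can be sketched as a small pure function. This is an illustrative sketch only: in Stitch the actual plan is assembled by the LLM orchestrator, and `planSession`, the `Stitch` type, and the `"A"`/`"B"` teacher labels are assumed names, not the real schema.

```typescript
type StitchKind = "peer_teach" | "llm_teach";

interface Stitch {
  kind: StitchKind;
  subconcept: string;
  teacher?: "A" | "B"; // only set for peer_teach
}

// Build a session plan for a pair. weakA/weakB are the subconcepts each
// student is weak on (hypothetical inputs; the real planner is LLM-driven).
function planSession(
  subconcepts: string[],
  weakA: Set<string>,
  weakB: Set<string>
): Stitch[] {
  const peerForA: Stitch[] = []; // B teaches A
  const peerForB: Stitch[] = []; // A teaches B
  const llm: Stitch[] = [];
  for (const sc of subconcepts) {
    const aWeak = weakA.has(sc);
    const bWeak = weakB.has(sc);
    if (aWeak && bWeak) llm.push({ kind: "llm_teach", subconcept: sc });
    else if (aWeak) peerForA.push({ kind: "peer_teach", subconcept: sc, teacher: "B" });
    else if (bWeak) peerForB.push({ kind: "peer_teach", subconcept: sc, teacher: "A" });
    // both strong: nothing to stitch
  }
  // Interleave the queues so neither student teaches two stitches in a row
  // and LLM primers break up long teaching runs.
  const plan: Stitch[] = [];
  while (peerForA.length || peerForB.length || llm.length) {
    if (peerForA.length) plan.push(peerForA.shift()!);
    if (peerForB.length) plan.push(peerForB.shift()!);
    if (llm.length) plan.push(llm.shift()!);
  }
  return plan;
}
```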

The room is synchronous. Each step is a teach card, an LLM primer, or a quiz. Quizzes are authored per stitch, grounded in the prof's actual slide bullets, and distractors are pulled from real misconceptions the slides warn about. While you're in the room you can chat with Stitch AI about the current subconcept, ask for a Socratic hint on any quiz question (it doesn't reveal the answer or eliminate choices), and get per-question feedback after submitting.

How We Built It

Frontend: Next.js, TypeScript end to end, Tailwind for styling.

Backend: server actions for everything that touches data; no separate API layer, no ORM.

Database + auth + realtime: Supabase.

LLM layer: OpenAI GPT-4o-mini for everything, across five separate call sites: syllabus parsing, lecture parsing, the session planner, the weekly quiz generator, and the in-room AI.

Matching algorithm: custom scoring function that considers focus mastery (your weakness on the clicked cell), their strength on that cell, reciprocity (do they have a weakness you can plug), and availability overlap in hours/week.
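A minimal sketch of that scoring shape, combining the four signals as a weighted sum. The weights, the `Candidate` shape, and the function names here are illustrative assumptions, not the production scoring function.

```typescript
interface Candidate {
  id: string;
  strengthOnFocus: number; // their mastery (0..1) on the cell you clicked
  bestReciprocal: number;  // your strength (0..1) on a weakness of theirs
  overlapHours: number;    // shared free hours per week
}

// Illustrative weights; the real values are tuned in-app.
const W = { focus: 0.4, help: 0.3, reciprocity: 0.2, overlap: 0.1 };

function matchScore(myMasteryOnFocus: number, c: Candidate): number {
  const focusGap = 1 - myMasteryOnFocus;           // how badly you need help
  const overlap = Math.min(c.overlapHours, 5) / 5; // cap so calendars don't dominate
  return (
    W.focus * focusGap +
    W.help * c.strengthOnFocus +
    W.reciprocity * c.bestReciprocal +
    W.overlap * overlap
  );
}

// Rank classmates and surface the top 3 matches.
function topMatches(myMasteryOnFocus: number, candidates: Candidate[]): Candidate[] {
  return [...candidates]
    .sort((a, b) => matchScore(myMasteryOnFocus, b) - matchScore(myMasteryOnFocus, a))
    .slice(0, 3);
}
```

Capping the availability term keeps a classmate with a wide-open calendar from outranking one who is actually strong where you are weak.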

Challenges We Ran Into

PostgREST's 1000-row cap: once we seeded 50+ students × 60+ subconcepts, the matcher started silently returning partial data, so we had to add a paginating helper and swap it in across five call sites.
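The fix can be sketched as a generic pagination loop that keeps requesting row ranges until it gets a short page. `fetchAll` and the `fetchPage` signature are illustrative, not our actual helper; with the Supabase client, `fetchPage` would wrap something like `supabase.from(table).select().range(from, to)`.

```typescript
// PostgREST caps a single response at 1000 rows, so unpaginated selects
// silently truncate once the table grows. This loops over .range()-style
// pages until a page comes back short.
async function fetchAll<T>(
  fetchPage: (from: number, to: number) => Promise<T[]>, // inclusive range
  pageSize = 1000
): Promise<T[]> {
  const rows: T[] = [];
  for (let from = 0; ; from += pageSize) {
    const page = await fetchPage(from, from + pageSize - 1);
    rows.push(...page);
    if (page.length < pageSize) break; // short page => last page
  }
  return rows;
}
```

Swapping this in at each call site is mechanical, which is what made the five-site fix tractable mid-hackathon.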

Parsing PPTX and DOCX. OpenAI won't take them directly, so we had to unzip them with JSZip and pull the text runs out of the slide XML before sending plain text.
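The extraction step can be sketched as a pure function over the already-unzipped slide XML (in a .pptx, that's `ppt/slides/slideN.xml`, where visible text lives in `<a:t>` runs). `extractSlideText` is an illustrative name; a production version would also decode XML entities and handle per-slide ordering.

```typescript
// Pull the human-readable text runs out of one slide's XML.
// Assumes the .pptx has already been unzipped (e.g. via JSZip's
// loadAsync + file(...).async("string")) into a plain XML string.
function extractSlideText(slideXml: string): string {
  const runs: string[] = [];
  const re = /<a:t>([^<]*)<\/a:t>/g; // DrawingML text runs
  let m: RegExpExecArray | null;
  while ((m = re.exec(slideXml)) !== null) runs.push(m[1]);
  return runs.join(" ").replace(/\s+/g, " ").trim();
}
```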

Accomplishments That We're Proud Of

We're proud of getting the full pipeline working end to end inside the hackathon window. Stitch Spaces feel genuinely synchronous: two browsers stay in lockstep within ~100ms on step advances, card flips, and ribbon updates. And what we're most proud of: we built for both sides of the classroom. Stitch has a full professor experience (upload syllabus, upload lectures, generate weekly quizzes, author/edit content) and a full student experience (ribbon, matching, Stitch Spaces, live mastery), and both sides share one coherent data model.

What We Learned

Grounding beats cleverness. Our first instinct was to let the LLM reason freely. It drifted into generic textbook explanations. Once we forced every prompt to pull from the prof's actual slide bullets, the AI stopped feeling like ChatGPT-in-a-box and started feeling like part of the class.

Seed data changes what your product is. With 5 students our matching looked perfect. With 50+ we hit PostgREST's 1000-row cap and realized half our queries were silently truncating. Scaling the test data forced us to rewrite the data layer and surfaced real matching-score behavior that the toy cases hid.

What's Next for Stitch

Canvas / Gradescope integrations, so you can pull grades in as additional mastery signal.

Embedded voice in Stitch Spaces. Right now you jump out to Zoom or Discord for the actual conversation, but we want WebRTC built in so the room is the session: mic, camera, and the Stitch AI orchestrator all in one pane.

Scheduled auto-matching. Right now matching is pull-based (you click a weak cell), but we want push-based too: Stitch notices that two students are reciprocally weak/strong and their calendars overlap Tuesday 7–9pm, and just suggests the session.
