Inspiration

In a typical classroom of 30 students, statistically 8–12 have a learning difference that standard materials don't address. 7.3 million students in the US receive special education services, 5.3 million are English Language Learners, and dyslexia alone affects 10% of the global population — yet most lesson materials are still written once, in one format, at one reading level.

On the other side, teachers are overwhelmed. US teachers spend 8.2 hours per week on lesson preparation alone (OECD TALIS 2024), 52% report burnout — the highest among all occupations — and fewer than 1 in 5 general education teachers feel "very well prepared" to teach students with learning disabilities.

The research is clear: differentiated instruction works. A meta-analysis of 49 studies found a large effect size (g = 1.109) on learning outcomes. Students in personalized programs scored 8 points higher in math and 9 points higher in reading within one year. But creating six different versions of every lesson is simply not feasible with human effort alone.

We built LessonLens to bridge the gap between what research says works and what teachers can realistically produce given their time and training constraints.

What It Does

LessonLens transforms any uploaded lesson document (PDF, image, or slide deck) into six research-backed adaptations, each tailored to a specific learning need:

  1. Dyslexia-Friendly — 15-word sentence limits, Grade 4–5 vocabulary, chunked paragraphs, and a glossary. Based on the BDA Style Guide, Rello & Baeza-Yates (2013), and Orton-Gillingham principles.

  2. ESL / Bilingual — Bilingual scaffolding, vocabulary spotlights, sentence frames, and CEFR A2–B1 targeting. Grounded in Krashen's i+1 hypothesis, SIOP, and Cummins' BICS/CALP framework.

  3. Visual — Concept maps, timelines, diagram suggestions, and AI-generated infographics via Nova Canvas. Based on Paivio's Dual Coding Theory and Mayer's Multimedia Principles (not debunked "learning styles").

  4. Audio — Narration scripts with pacing markers ([PAUSE], [EMPHASIZE]), converted to MP3 via Amazon Polly Neural TTS. Based on Mayer's Modality and Segmenting Principles.

  5. ADHD-Optimized — Micro-cards (50 words max), quiz checkpoints, and progress tracking. Based on Barkley's Executive Function model and gamification RCTs.

  6. Gifted Enrichment — Socratic questions, What-If scenarios, and research rabbit holes at Webb DOK 3–4. Based on Bloom's Taxonomy, Kaplan's Depth & Complexity, and Renzulli's Enrichment Triad.

Teachers can further refine any adaptation with follow-up instructions, translate ESL content into 20+ languages, and generate audio narration — all from a single upload.

How We Built It

LessonLens is built on a four-stage Amazon Nova AI pipeline:

  1. Content Extraction (Nova Pro) — us.amazon.nova-pro-v1:0, via the ConverseCommand multimodal API, extracts complete lesson content from PDFs and images. Its 300K-token context window handles diverse classroom materials. For lengthy documents, multi-pass extraction (up to 3 passes, 10,000 tokens each) ensures nothing is lost.

  2. Intelligent Adaptation (Nova 2 Lite) — us.amazon.nova-2-lite-v1:0 generates all six adaptations in parallel via SSE streaming. Each adaptation type has a 2,000+ token prompt encoding specific pedagogical requirements from peer-reviewed research. Outputs are structured JSON validated against Zod v4 schemas with auto-coercion for missing fields.

  3. Visual Diagrams (Nova Canvas) — amazon.nova-canvas-v1:0 generates 512×512 educational infographics with clean styling, labeled sections, and white backgrounds — aligned with Mayer's Coherence Principle.

  4. Audio Narration (Amazon Polly) — Neural TTS (voice: Ruth, en-US) converts narration scripts to MP3. SSML markers (<break>, <emphasis>) are generated from [PAUSE] and [EMPHASIZE] tags, synthesized at 24,000 Hz with automatic chunking for scripts exceeding Polly's 6,000-character SSML limit.
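The audio stage's marker-to-SSML conversion and chunking can be sketched as below. This is an illustrative sketch, not the actual LessonLens source: the exact break duration, the assumption that [EMPHASIZE] is a paired tag with a closing [/EMPHASIZE], and the function names are ours.

```typescript
// Convert [PAUSE] / [EMPHASIZE]...[/EMPHASIZE] pacing markers to SSML
// and wrap the result in a <speak> root, as required by Polly.
function toSsml(script: string): string {
  const body = script
    .replace(/\[PAUSE\]/g, '<break time="600ms"/>') // pause length is an assumption
    .replace(/\[EMPHASIZE\]([\s\S]*?)\[\/EMPHASIZE\]/g, "<emphasis>$1</emphasis>");
  return `<speak>${body}</speak>`;
}

// Split a narration script at sentence boundaries so each piece stays
// under Polly's SSML character limit (6,000 in the real pipeline; the
// parameter is exposed here so the logic is easy to test).
function chunkScript(script: string, maxLen = 6000): string[] {
  const sentences = script.split(/(?<=[.!?])\s+/);
  const chunks: string[] = [];
  let current = "";
  for (const sentence of sentences) {
    const candidate = current ? `${current} ${sentence}` : sentence;
    if (toSsml(candidate).length > maxLen && current) {
      chunks.push(current); // flush the full chunk, start a new one
      current = sentence;
    } else {
      current = candidate;
    }
  }
  if (current) chunks.push(current);
  return chunks;
}
```

Each chunk would then be synthesized separately and the MP3 segments concatenated.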

Anti-Hallucination by Design

Every prompt includes Grounding Rules: only use facts from the source lesson, never invent new statistics or examples, reproduce equations exactly, and omit rather than guess. Post-processing strips confusion patterns, and Zod schema validation enforces required output structure.
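The auto-coercion half of that defense can be sketched in plain TypeScript. The real app uses Zod v4 schemas; this hand-rolled version only illustrates the principle — substitute a safe default for each malformed or missing field rather than rejecting the whole adaptation. The field names are hypothetical.

```typescript
// Hypothetical shape of one adaptation's structured output.
interface DyslexiaAdaptation {
  title: string;
  chunks: string[];                 // short, chunked paragraphs
  glossary: Record<string, string>; // term -> plain-language definition
}

// Coerce an untrusted model response into a valid adaptation object,
// dropping non-string chunks and defaulting missing fields.
function coerceAdaptation(raw: unknown): DyslexiaAdaptation {
  const obj = (typeof raw === "object" && raw !== null
    ? raw
    : {}) as Record<string, unknown>;
  return {
    title: typeof obj.title === "string" ? obj.title : "Untitled lesson",
    chunks: Array.isArray(obj.chunks)
      ? obj.chunks.filter((c): c is string => typeof c === "string")
      : [],
    glossary:
      typeof obj.glossary === "object" && obj.glossary !== null
        ? (obj.glossary as Record<string, string>)
        : {},
  };
}
```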

Tech Stack

  • Framework: Next.js 14 (App Router) + TypeScript + Tailwind CSS
  • AI: Amazon Bedrock (Nova Pro, Nova 2 Lite, Nova Canvas)
  • TTS: Amazon Polly (Neural engine, SSML)
  • UI: shadcn/ui + Framer Motion
  • Auth: NextAuth v4 (Google OAuth + credentials)
  • Database: Prisma v7 + PostgreSQL
  • Storage: Amazon S3 (file uploads)
  • Validation: Zod v4 schemas for all AI outputs
  • Deployment: AWS EC2 + Caddy + PM2

Challenges We Ran Into

  • Structured JSON from LLMs. Getting Nova 2 Lite to consistently produce valid, schema-compliant JSON for six different adaptation formats was the hardest challenge. We solved it with detailed output format instructions in every prompt, Zod v4 validation with auto-coercion for missing fields, and post-processing to strip markdown fences and confusion patterns.

  • Anti-hallucination for education. In educational content, a single invented fact can mislead students. We engineered strict grounding rules into every prompt and validated outputs against the source material structure. The model is instructed to omit rather than guess — especially for equations, dates, and statistics.

  • Parallel streaming of six adaptations. Running six Nova 2 Lite calls in parallel and streaming each result to the client via SSE as it completes required careful orchestration with Promise.allSettled and chunked encoding. Each adaptation streams independently, so the teacher sees results appearing in real-time.

  • Multimodal document diversity. Real classroom materials range from typed PDFs to handwritten worksheets to photographed textbook pages. Nova Pro's multimodal ConverseCommand API handles this well, but we needed multi-pass extraction for longer documents to avoid truncation.

  • Prisma v7 + Node.js compatibility. Prisma v7's driver adapter architecture required Node.js 20.19+ and a different configuration pattern (prisma.config.ts with pg Pool adapter instead of a connection URL in the schema). This caused deployment issues on EC2 that required careful debugging.

  • localStorage limits for client-side storage. With six rich adaptations per lesson, a single lesson can consume significant storage. We hit the ~5MB localStorage limit, causing silent save failures and one lesson's data bleeding into another's. We solved this with auto-cleanup of old lessons and save verification.
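The fence-stripping step from the structured-JSON challenge is small enough to show in full. A sketch of the idea, not the exact LessonLens post-processor: remove the markdown code fences the model sometimes wraps around its JSON, then parse.

```typescript
// Strip a leading ```json (or bare ```) fence and a trailing ``` fence,
// if present, then parse the remaining text as JSON. Plain, unfenced
// JSON passes through unchanged.
function parseModelJson(text: string): unknown {
  const cleaned = text
    .trim()
    .replace(/^```(?:json)?\s*/i, "")
    .replace(/\s*```$/, "");
  return JSON.parse(cleaned);
}
```

In the real pipeline this runs before schema validation, so a fenced-but-valid response never triggers the coercion fallbacks.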

Accomplishments That We're Proud Of

  • Research-grounded, not generic. Every adaptation type is backed by specific peer-reviewed research — BDA Style Guide for dyslexia, Krashen's i+1 for ESL, Mayer's Multimedia Principles for visual/audio, Barkley's Executive Function model for ADHD, and Bloom's/Kaplan's frameworks for gifted. This isn't "make it simpler" — it's pedagogically precise differentiation.

  • Full Amazon Nova pipeline. We use four Amazon AI services in a cohesive pipeline: Nova Pro for multimodal extraction, Nova 2 Lite for parallel adaptation generation, Nova Canvas for educational diagrams, and Amazon Polly for audio narration. Each service is used for what it does best.

  • Teacher refinement loop. AI assists but doesn't replace. Teachers can provide follow-up instructions to refine any adaptation, keeping them in control of the final output.

  • Graceful degradation. The entire infrastructure stack is optional. No AWS credentials? Demo mode with pre-generated data. No database? Client-side localStorage. No Stripe keys? Free tier only. The app never crashes due to missing configuration.

  • Real-time streaming UX. Teachers see each adaptation appear as it completes, with a visual progress indicator for all six types. The SSE-based streaming makes the 15–30 second generation time feel interactive rather than like a loading screen.
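The graceful-degradation pattern reduces to a single decision at startup: derive the running mode from which credentials are present instead of crashing on a missing one. A minimal sketch — the environment variable names are illustrative, not necessarily the real config keys.

```typescript
// Which mode each subsystem runs in, given the available configuration.
interface Features {
  ai: "live" | "demo";                       // AWS credentials present?
  persistence: "database" | "localStorage";  // database URL present?
  billing: "stripe" | "free-only";           // Stripe keys present?
}

// Inspect the environment once and fall back per subsystem,
// so a missing key degrades one feature instead of the whole app.
function detectFeatures(env: Record<string, string | undefined>): Features {
  return {
    ai: env.AWS_ACCESS_KEY_ID ? "live" : "demo",
    persistence: env.DATABASE_URL ? "database" : "localStorage",
    billing: env.STRIPE_SECRET_KEY ? "stripe" : "free-only",
  };
}
```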

What We Learned

  • Prompt engineering is pedagogy engineering. Writing effective prompts for educational content required deep research into each learning difference. A 2,000-token prompt for dyslexia adaptation encodes decades of research from the British Dyslexia Association, Orton-Gillingham, and readability studies. The AI is only as good as the pedagogical knowledge embedded in the prompt.

  • Structured output requires structured thinking. Getting LLMs to produce consistent JSON is a design problem, not just a prompting problem. Schema validation, auto-coercion, and post-processing form a defense-in-depth strategy that makes the system reliable.

  • The scale of the differentiation gap. Researching the statistics was eye-opening: 72% of 4th graders with disabilities score below basic in reading, 52% of teachers report burnout, and 2 out of 5 learning disabilities go undiagnosed. The need for tools like this is urgent and growing.

  • Multimodal AI unlocks real-world documents. Teachers don't work with clean text files — they work with scanned worksheets, photographed textbooks, and exported slides. Nova Pro's ability to understand these diverse formats is what makes the tool practical for real classrooms.

  • Amazon Bedrock simplifies multi-model orchestration. Using Nova Pro, Nova 2 Lite, Nova Canvas, and Polly through a unified AWS SDK made it straightforward to build a four-stage pipeline. Each model serves a distinct purpose, and Bedrock's API consistency made integration clean.

What's Next for LessonLens

  • Classroom analytics. Track which adaptation types are most used per student, helping teachers identify learning needs they may not have recognized.

  • LMS integration. Export adapted lessons directly to Google Classroom, Canvas, or Schoology so differentiated materials flow into existing workflows.

  • Student self-selection. Let students choose their preferred adaptation type, building self-advocacy skills while collecting anonymized preference data.

  • Batch processing. Upload an entire unit or curriculum and generate all six adaptations for every lesson at once.

  • IEP/504 alignment. Map adaptations to specific IEP goals and 504 plan accommodations, generating compliance-ready documentation alongside the adapted content.

  • Collaborative refinement. Allow special education specialists and general education teachers to co-refine adaptations, combining domain expertise with AI efficiency.

  • Multilingual expansion. Extend ESL support beyond bilingual scaffolding to full curriculum translation, leveraging Nova's multilingual capabilities for classrooms with diverse language backgrounds.
