Inspiration
On our team, reviewing long generative-AI answers has become a daily routine. We quickly noticed a productivity gap:
| Metric | Faster Reader | Slower Reader |
|---|---|---|
| Time to understand 1,200 Japanese characters | 1 min | 3 min |
| ChatGPT iterations in 30 min | 30 | 10 |
That 3× gap directly lowers discussion quality and output.
Existing speed-reading tools didn’t help:
- Built for paper books
- Focus on raw speed, ignore comprehension
- Require in-person classes—no remote/mobile support
So we designed AI Reading Lab, a phone-first service that trains reading speed × comprehension together.
What It Does
- 1-minute micro-drills measure eWPM (effective WPM = speed × comprehension).
- Three skill tracks:
  - Find – rapid information extraction
  - Grasp – key-point recognition
  - Link – context alignment & logic checking
- Uses live generative-AI answers as training material for real-world relevance.
- Progress dashboard visualises strengths/weaknesses and a rule-based AI coach suggests the next drill.
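The rule-based coach above can be sketched as a tiny pure function. The track names come from the three skill tracks; the per-track stats shape and the "recommend the weakest track" rule are assumptions for illustration, not the shipped logic.

```typescript
// Hypothetical sketch of a rule-based next-drill recommender.
// Track names match the three skill tracks; everything else is assumed.

type Track = "Find" | "Grasp" | "Link";

interface TrackStats {
  track: Track;
  accuracy: number; // fraction of comprehension questions correct, 0..1
}

// Suggest the next drill: pick the track with the lowest accuracy,
// breaking ties in favor of the earlier entry.
function nextDrill(stats: TrackStats[]): Track {
  return stats.reduce((worst, s) => (s.accuracy < worst.accuracy ? s : worst))
    .track;
}
```

Keeping the rules in one pure function like this makes it easy to later swap in generative advice behind the same interface.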
How We Built It
| Layer | Tech / Service | Notes |
|---|---|---|
| Front-end | React 18, Vite, Tailwind CSS, shadcn/ui | Mobile-first |
| Routing & Forms | React Router DOM · React Hook Form + Zod | |
| Back-end | Supabase (PostgreSQL, Auth, Edge Functions) | |
| Payments | Stripe Checkout | |
| AI Workflow | Claude Sonnet 4 via Cursor → automatic generation & QA of drill content | |
| Coaching Logic | Currently rule-based (threshold & ranking rules) | |
| Dev Platform | Bolt.new for scaffold & live preview (badge visible) | |
| Hosting & CI/CD | Netlify | |
| Dev Tools | Cursor IDE, GitHub, ESLint/Prettier | |
Key design choices:
- Prompt-encoded problem patterns → Claude generates drills and self-tests them.
- eWPM algorithm combines time, accuracy, and per-question stats to surface precise weak spots.
- 6-tier rank (E → S) auto-adjusts difficulty as users improve.
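A minimal sketch of the eWPM metric and the six-tier ladder, assuming eWPM = raw reading speed × comprehension accuracy (with characters per minute standing in for WPM on Japanese text). The exact formula and the tier cut-off values are illustrative assumptions, not the production algorithm.

```typescript
// Hypothetical eWPM calculation: the write-up says the metric combines
// time, accuracy, and per-question stats; the formula below is assumed.

interface DrillResult {
  charsRead: number; // Japanese characters read in the drill
  seconds: number;   // time spent reading
  correct: number;   // comprehension questions answered correctly
  total: number;     // total comprehension questions
}

// Effective WPM: raw speed (chars/min as a WPM proxy) scaled by accuracy.
function eWPM(r: DrillResult): number {
  const rawCpm = (r.charsRead / r.seconds) * 60;
  const accuracy = r.total > 0 ? r.correct / r.total : 0;
  return Math.round(rawCpm * accuracy);
}

// Six-tier rank ladder (E → S); the thresholds are made-up examples.
const TIERS = ["E", "D", "C", "B", "A", "S"] as const;
const THRESHOLDS = [0, 300, 600, 900, 1200, 1500]; // eWPM cut-offs

function rank(ewpm: number): string {
  let tier: string = TIERS[0];
  for (let i = 0; i < THRESHOLDS.length; i++) {
    if (ewpm >= THRESHOLDS[i]) tier = TIERS[i];
  }
  return tier;
}
```

For example, reading 1,200 characters in 60 seconds with 8/10 questions correct yields an eWPM of 960, which lands in tier B under these example thresholds.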
Challenges We Ran Into
- Redefining “speed-reading” for the AI era – eye-movement drills and archaic prose didn’t map to structured AI text.
- Difficulty tuning – raising complexity by structure, not content level, required several prompt iterations.
- Solo-developer scale – automating generation + testing with AI was essential to stay feasible.
Accomplishments We’re Proud Of
- Breakthrough: fully automated problem generation + unit tests with prompt templates and Claude.
- Implemented reliable eWPM metric—speed and comprehension in one number.
- Built a foundation where one person can continuously expand content without quality loss.
What We Learned
- For large tasks, “Document → Generate → Human-review” loops with AI keep scope aligned.
- Automated tests are non-negotiable—prevent “it worked yesterday” regressions.
- Mobile-first UX plus data-driven feedback dramatically boosts learner motivation.
What’s Next for AI Reading Lab
- Stronger AI coach
- As the drill corpus grows, switch from rule-based to fully generative personalised advice.
- Content expansion
- More patterns → finer-grained weakness detection.
- Kids Edition
- Tailor drills and UI for elementary students to build early reading fluency.
- Road-map
- Multi-language support, team/enterprise dashboards, and subscription tiers.
