Inspiration

Reading is fundamental to learning and everyday life, but millions of people with dyslexia, visual impairments, autism, or ADHD struggle with traditional text. We noticed that existing accessibility tools often focus on just one condition, forcing users to switch between multiple apps or compromise on their experience.

We wanted to create a single platform that could adapt to different reading challenges. Research suggests that 15–20% of the population has some form of reading difficulty, meaning as many as 1 in 5 people could benefit from better tools. ReadEaseAI was born from the belief that accessibility should be built in from the start, not added as an afterthought.

How We Built It

ReadEaseAI is built with Next.js 15 and React 19, using Tailwind CSS for styling. We integrated OpenAI's GPT-4 for intelligent text simplification and conversational assistance. For document processing, we use pdf-parse and pdfjs-dist, and Google TTS API handles text-to-speech functionality. The UI is built with Radix UI components for maximum accessibility, and we deploy on Vercel.

We designed four specialized reading modes. The Dyslexia mode uses the OpenDyslexic font with adjustable spacing and color overlays. The Visual Support mode provides text-to-speech and an AI assistant for questions. The Autism mode simplifies text structure with predictable layouts and visual supports. The ADHD mode chunks content into manageable pieces with active highlighting and progress tracking.
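The four modes lend themselves to a configuration-driven design, where each mode is just a bundle of settings applied to the same reader. The sketch below illustrates that idea; the `ReadingModeConfig` shape and the specific default values are assumptions for illustration, not our actual source code.

```typescript
// Hypothetical reading-mode configuration; field names and defaults are
// illustrative, not taken from the actual ReadEaseAI source.
type ModeId = "dyslexia" | "visual" | "autism" | "adhd";

interface ReadingModeConfig {
  id: ModeId;
  fontFamily: string;          // e.g. OpenDyslexic for the dyslexia mode
  letterSpacingEm: number;     // extra tracking applied to body text
  chunkSize: number | null;    // words per chunk (ADHD mode); null = no chunking
  textToSpeech: boolean;       // enable audio narration
  colorOverlay: string | null; // tinted overlay for visual comfort
}

const MODES: Record<ModeId, ReadingModeConfig> = {
  dyslexia: { id: "dyslexia", fontFamily: "OpenDyslexic", letterSpacingEm: 0.12, chunkSize: null, textToSpeech: false, colorOverlay: "#fdf6e3" },
  visual:   { id: "visual",   fontFamily: "system-ui",    letterSpacingEm: 0,    chunkSize: null, textToSpeech: true,  colorOverlay: null },
  autism:   { id: "autism",   fontFamily: "system-ui",    letterSpacingEm: 0.05, chunkSize: null, textToSpeech: false, colorOverlay: null },
  adhd:     { id: "adhd",     fontFamily: "system-ui",    letterSpacingEm: 0,    chunkSize: 40,   textToSpeech: false, colorOverlay: null },
};
```

Keeping the modes as data rather than four separate components makes it easy to let users tweak individual settings within a mode.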

Each feature is grounded in research on cognitive accessibility. For example, studies have found that increased letter spacing can improve reading speed by around 20% for dyslexic readers, and breaking text into smaller chunks helps readers with ADHD maintain focus.

What We Learned

Building truly accessible software is complex. It's not just about screen readers; it requires understanding diverse cognitive and sensory needs. We also learned that AI text simplification is tricky: you need to preserve meaning while reducing complexity, and getting the prompts right took significant experimentation.
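To make the experimentation concrete, a hedged sketch of the kind of prompt builder this converges toward might look like the following. The wording, the 1–5 level scale, and the function name are illustrative assumptions, not our production prompt.

```typescript
// Illustrative prompt builder for GPT-4 text simplification.
// The exact wording and the 1-5 level scale are assumptions for this
// sketch, not the production prompt used by ReadEaseAI.
function buildSimplifyPrompt(text: string, level: number): string {
  // Clamp to a sane range so UI bugs can't produce a nonsense level.
  const clamped = Math.min(5, Math.max(1, Math.round(level)));
  return [
    `Rewrite the passage below at simplification level ${clamped} of 5`,
    "(1 = light edits, 5 = very plain language).",
    "Preserve every fact, name, and number; do not add information.",
    "Use short sentences and common words.",
    "",
    text,
  ].join("\n");
}
```

The key instruction is the "preserve every fact" line: without an explicit constraint like it, models tend to drop details as they simplify.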

We also discovered that performance matters more for accessibility than we initially thought. Smooth, responsive interfaces are critical for users with attention difficulties. We learned about font loading optimization, browser compatibility issues with speech APIs, and the importance of giving users granular control over their experience.
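One concrete lesson about speech-API compatibility: feature-detect first, then fall back. A minimal sketch of that decision, with the function name and engine labels as illustrative assumptions:

```typescript
// Decide which TTS engine to use. In the browser we would check for the
// Web Speech API; when it's missing or unreliable we fall back to a
// server-side Google TTS call. Names here are illustrative.
type TtsEngine = "web-speech" | "google-tts";

function pickTtsEngine(hasSpeechSynthesis: boolean): TtsEngine {
  return hasSpeechSynthesis ? "web-speech" : "google-tts";
}

// In browser code this would be driven by feature detection, e.g.:
// pickTtsEngine(typeof window !== "undefined" && "speechSynthesis" in window)
```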

Most importantly, we realized that accessibility benefits everyone. Even people without diagnosed conditions appreciate these features when they're tired, stressed, or reading in a non-native language. Universal design really does help all users.

Challenges We Faced

PDF processing was initially too slow for large documents. We solved this with lazy loading and Web Workers that process pages off the main thread. Browser support for text-to-speech was inconsistent, so we added Google TTS as a reliable fallback.
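The lazy-loading side can be illustrated with a small helper that splits a document into page batches, so a worker can process one batch at a time instead of the whole file at once. The function name and batch size are illustrative, not the exact values we use.

```typescript
// Split a PDF's pages into small batches so each batch can be posted
// to a Web Worker and processed off the main thread. Batch size and
// names are illustrative, not the exact values used in ReadEaseAI.
function pageBatches(totalPages: number, batchSize = 5): [number, number][] {
  const batches: [number, number][] = [];
  for (let start = 1; start <= totalPages; start += batchSize) {
    // Each batch is an inclusive [first, last] page range.
    batches.push([start, Math.min(start + batchSize - 1, totalPages)]);
  }
  return batches;
}

// e.g. a 12-page document with batches of 5:
// pageBatches(12) -> [[1, 5], [6, 10], [11, 12]]
```

Only the batch containing the page the user is viewing needs to be processed eagerly; the rest can be queued behind it.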

Managing AI costs was another challenge. We implemented caching and batching strategies that cut API calls by about 70%. Balancing simplification against accuracy was also difficult: over-simplifying sometimes lost important nuance. We added a simplification slider so users can choose the level that works for them.
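The caching side can be sketched as a simple in-memory memoizer keyed on the text plus the simplification level. The `Simplifier` type and the key scheme are assumptions for illustration; production code would also bound the cache size and batch requests.

```typescript
// In-memory cache for simplification results, keyed by text + level.
// A sketch of the idea only: real code would also evict old entries
// and batch multiple requests into one API call.
type Simplifier = (text: string, level: number) => Promise<string>;

function withCache(simplify: Simplifier): Simplifier {
  const cache = new Map<string, Promise<string>>();
  return (text, level) => {
    const key = `${level}:${text}`;
    let hit = cache.get(key);
    if (!hit) {
      hit = simplify(text, level); // only a cache miss triggers an API call
      cache.set(key, hit);
    }
    return hit;
  };
}
```

Caching the promise (rather than the resolved string) means concurrent requests for the same passage share a single in-flight API call.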

Ensuring TypeScript type safety over dynamic PDF content required writing comprehensive type definitions. Making the responsive design work across four different reading modes took careful planning with Tailwind utilities.
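For the dynamic PDF data, runtime type guards let `unknown` values be narrowed into shapes we control. A simplified sketch of the pattern; the `TextItem` shape here is illustrative, and the real pdfjs-dist text items carry more fields.

```typescript
// Narrow unknown parser output into a typed shape we control.
// TextItem here is a simplified, illustrative version of a pdfjs-dist
// text item (the real one carries transform, width, and more).
interface TextItem {
  str: string;
  pageIndex: number;
}

function isTextItem(value: unknown): value is TextItem {
  if (typeof value !== "object" || value === null) return false;
  const v = value as Record<string, unknown>;
  return typeof v.str === "string" && typeof v.pageIndex === "number";
}
```

Guards like this keep `any` out of the rendering code: everything downstream of the guard is fully typed, and malformed parser output fails fast instead of crashing mid-render.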

Impact

ReadEaseAI makes reading accessible for people with different cognitive and sensory needs. We're planning mobile apps, a browser extension, offline functionality, and multi-language support to reach even more users and create a more inclusive digital world.

Built With

Next.js, React, Tailwind CSS, OpenAI GPT-4, pdf-parse, pdfjs-dist, Google TTS, Radix UI, Vercel
