Axessify is an accessibility-first study platform. Most people don't realize it, but many features you already love, like dark mode, captions, and voice assistants, were originally built for people with disabilities. They became defaults because accessible design is good design. So we flipped the model: instead of adding accessibility at the end, we built a platform made for visually impaired learners, students with ADHD, and other neurodivergent users, and it works better for everyone. Axessify: made for some, adopted by everyone.

Working in disability resources made it clear that accessibility is usually treated as an afterthought. It was something added only to meet requirements. But in reality, many features like dark mode, captions, voice assistants, and keyboard shortcuts were originally designed for accessibility.

That raised a question in our minds: What if accessibility wasn’t the add-on, but the foundation?

Axessify was built from that idea. We wanted to create a platform where students with disabilities are not accommodated later, but prioritized from the start.

Axessify is built for people with disabilities: students with dyslexia, ADHD, low vision, and other learning needs.

It allows users to:
- Chat with an AI tutor
- Upload and study documents with AI assistance
- Generate summaries, quizzes, flashcards, and step-by-step guides
- Customize their reading experience with tools like dyslexic fonts, line focus, spacing, and themes

Instead of forcing users to adapt to the platform, Axessify adapts to the user, creating a personalized and flexible study experience that works for different brains and workflows.
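As an illustrative sketch, reading preferences like these can be modeled as a typed object that maps onto CSS custom properties, so a single preference change restyles every document view. All names below are hypothetical, not Axessify's actual code:

```typescript
// Hypothetical sketch: map a user's reading preferences to CSS custom
// properties that the app's stylesheets can consume.
interface ReadingPreferences {
  dyslexicFont: boolean;
  lineSpacing: number;   // line-height multiplier, e.g. 1.5
  letterSpacing: number; // in em
  theme: "light" | "dark" | "high-contrast";
}

function toCssVariables(prefs: ReadingPreferences): Record<string, string> {
  return {
    "--reader-font-family": prefs.dyslexicFont
      ? "'OpenDyslexic', sans-serif"
      : "system-ui, sans-serif",
    "--reader-line-height": String(prefs.lineSpacing),
    "--reader-letter-spacing": `${prefs.letterSpacing}em`,
    "--reader-theme": prefs.theme,
  };
}
```

The resulting map would be applied with `element.style.setProperty(...)`, keeping the preference logic in one place instead of scattered across components.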

We built Axessify using:
- React + Vite + TypeScript for a fast, responsive frontend
- Firebase (Auth, Firestore, Storage, Functions) for backend infrastructure
- Gemini API for AI-powered chat, document understanding, and study artifact generation
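For the study artifact generation step, a common pattern is to prompt Gemini for JSON and then defensively parse the reply into typed objects, since models often wrap their output in a markdown code fence. The sketch below is a hypothetical illustration; the field names and helpers are assumptions, not Axessify's actual schema:

```typescript
// Hypothetical sketch: turn a model's text reply into typed flashcards.
interface Flashcard {
  front: string;
  back: string;
}

// Models often wrap JSON in a markdown code fence; strip it before parsing.
function stripCodeFence(raw: string): string {
  const fence = "`".repeat(3); // built dynamically to avoid a literal fence in source
  let s = raw.trim();
  if (s.startsWith(fence)) {
    const firstNewline = s.indexOf("\n");
    s = firstNewline === -1 ? "" : s.slice(firstNewline + 1);
  }
  if (s.endsWith(fence)) {
    s = s.slice(0, s.lastIndexOf(fence));
  }
  return s.trim();
}

function parseFlashcards(raw: string): Flashcard[] {
  const data = JSON.parse(stripCodeFence(raw));
  if (!Array.isArray(data)) throw new Error("expected a JSON array of cards");
  return data.map((item): Flashcard => {
    if (typeof item?.front !== "string" || typeof item?.back !== "string") {
      throw new Error("malformed flashcard");
    }
    return { front: item.front, back: item.back };
  });
}
```

Validating the shape up front means a malformed model reply fails loudly at the parsing boundary rather than rendering a broken quiz or flashcard deck.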

We also implemented:
- Document chunking and embeddings for retrieval-based AI responses
- Structured parsing for interactive quizzes and flashcards
- Accessibility-first UI controls integrated directly into the study workflow
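The chunking step can be sketched as fixed-size windows with overlap, the usual first stage of a retrieval pipeline. This is a minimal illustration with assumed sizes, not the platform's actual implementation:

```typescript
// Hypothetical sketch: split a document into overlapping fixed-size chunks.
// The overlap keeps sentences that straddle a boundary retrievable from
// either side. Sizes are illustrative.
function chunkDocument(text: string, chunkSize = 800, overlap = 100): string[] {
  if (overlap >= chunkSize) throw new Error("overlap must be smaller than chunkSize");
  const chunks: string[] = [];
  let start = 0;
  while (start < text.length) {
    chunks.push(text.slice(start, start + chunkSize));
    if (start + chunkSize >= text.length) break;
    start += chunkSize - overlap;
  }
  return chunks;
}
```

Each chunk would then be embedded and stored, and at question time the most similar chunks are retrieved and passed to Gemini as context for the answer.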

One of the biggest challenges was integrating Firebase with AI workflows, especially:
- Handling document processing pipelines (upload → chunk → embed → retrieve)
- Managing API limits and error handling for Gemini
- Keeping responses fast while still being context-aware
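For the rate-limit and error-handling piece, a common approach is exponential backoff around each Gemini call. A minimal sketch, assuming failures are transient and safe to retry:

```typescript
// Hypothetical sketch: exponential backoff for rate-limited API calls.
// Delay schedule (500ms, 1s, 2s, ..., capped at 8s) is illustrative.
function backoffMs(attempt: number, baseMs = 500, maxMs = 8000): number {
  return Math.min(baseMs * 2 ** attempt, maxMs);
}

async function withRetry<T>(fn: () => Promise<T>, maxAttempts = 4): Promise<T> {
  let lastError: unknown;
  for (let attempt = 0; attempt < maxAttempts; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      // Wait before the next attempt.
      await new Promise((resolve) => setTimeout(resolve, backoffMs(attempt)));
    }
  }
  throw lastError;
}
```

In practice you would retry only on retryable errors (e.g. HTTP 429 or 5xx responses) and surface everything else immediately rather than burning attempts.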

Another major challenge was designing for real accessibility, not just surface-level features. Disabilities like ADHD and autism vary widely, so there is no one-size-fits-all solution. We had to rethink how users interact with content and build flexible systems rather than fixed features.

We’re proud that this project is informed by real experience working with people with disabilities, not just assumptions or checklists. That perspective shaped everything we designed, from reading tools to study workflows, and helped us focus on what actually matters for users with dyslexia, ADHD, and low vision. Instead of guessing what users need, we approached the problem with empathy and firsthand insight, which made the platform more practical and meaningful.

We learned how complex disabilities are. Working with different tools was a challenge we expected, but understanding neurodivergence was another challenge entirely. Autism and ADHD vary so widely that there is no one-size-fits-all solution, so we needed AI to generate a personalized plan for each individual.

Axessify’s future goal is to make learning truly inclusive.

V2 Plan:
- Screen reader integration
- Speech-to-text input
- Real-time text-to-speech improvements
- Optional live ASL interpreter integration
- Smarter personalization using AI-driven learning profiles

V3 Plan:
- Adaptive learning paths based on user behavior
- Collaboration tools for group studying
- Accessibility insights dashboard for educators
- Mobile-first experience

V4 Plan: Axessify evolves into a next-generation document reader, designed to replace traditional PDF viewers for studying. Instead of static reading, documents become interactive and adaptive:
- Fully integrated AI alongside every document (no switching tabs)
- Real-time simplification, summaries, and explanations inline
- Personalized reading modes that adapt automatically to user needs
- Built-in accessibility as the default, not a setting
- Interactive elements like quizzes, flashcards, and guided steps directly embedded in documents

The goal isn’t just to view PDFs. It's to transform them into active learning environments.

App preview: link

Github: link
