Inspiration
Growing up, one of our team members was told he was stupid because he couldn't read like everyone else. He wasn't stupid — his brain was just wired differently. Through years of practice, he managed to overcome his dyslexia symptoms by the age of 14. But not everyone has that time, those resources, or that support.
780 million people worldwide live with dyslexia. 40% of them experience anxiety. 60% face depression. And the employment rate for adults with learning disabilities sits at just 48%. Yet every accessibility tool on the market applies one simplification method and calls it a day, as if every dyslexic brain works the same way. They don't.
We built Qlarity because we believe reading shouldn't be a disability.
What it does
Qlarity is a quantum-inspired AI agent that transforms any document into the reading format your dyslexic brain understands best.
Upload anything (a photo of a textbook page, a PDF, a screenshot, a webpage) and Qlarity generates multiple comprehension strategies in parallel: short sentences, bullet breakdowns, vocabulary-substituted versions, step-by-step logical flows, and audio-first explanations. Think of it as a quantum superposition of reading formats.
Users start with a quick reading calibration that helps the agent understand how they process text. Based on this and ongoing interaction signals — which version they spend time on, replay, or skip — the agent collapses to the optimal format. Over time, it builds a cognitive profile and adapts future content automatically.
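The superposition-collapse idea above can be sketched in a few lines of Ruby. This is a hypothetical illustration, not Qlarity's actual code: the signal names (`dwell_seconds`, `replays`, `skips`) and their weights are assumptions chosen to show how interaction signals could score formats and pick a winner.

```ruby
# Hypothetical signal weights — illustrative values, not Qlarity's real ones.
SIGNAL_WEIGHTS = { dwell_seconds: 1.0, replays: 5.0, skips: -8.0 }.freeze

# Accumulate a weighted score per reading format from raw interaction events.
def format_scores(events)
  events.each_with_object(Hash.new(0.0)) do |event, scores|
    weight = SIGNAL_WEIGHTS.fetch(event[:signal], 0.0)
    scores[event[:format]] += weight * event[:value]
  end
end

# "Collapse": converge on the format with the highest engagement score.
def collapse(events)
  format_scores(events).max_by { |_format, score| score }&.first
end

events = [
  { format: :bullets, signal: :dwell_seconds, value: 40 },
  { format: :bullets, signal: :replays,       value: 2  },
  { format: :audio,   signal: :dwell_seconds, value: 15 },
  { format: :short,   signal: :skips,         value: 1  }
]
collapse(events) # => :bullets
```

In a real system the weights would themselves be tuned per user as the cognitive profile builds up, rather than fixed constants.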
For businesses, Qlarity isn't just an accessibility tool: it's a productivity engine. One employee processing 50 documents a week saves 4.5 minutes per document, or 3.75 hours a week. At €40/hour, that's €150 of recovered time per employee every week — roughly a 60x return on investment. In a company of 500, where roughly 100 employees are affected by dyslexia (most of them undiagnosed), that's 375 hours of hidden productivity unlocked every week.
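The per-employee and organisation-wide figures above follow directly from the stated inputs; here's the arithmetic as a quick back-of-envelope check (the 60x ROI additionally depends on Qlarity's price per seat, which isn't restated here):

```ruby
# Inputs taken straight from the pitch.
docs_per_week      = 50
minutes_saved      = 4.5   # per document
hourly_rate_eur    = 40
affected_employees = 100   # of a 500-person company

hours_saved_per_employee = docs_per_week * minutes_saved / 60.0
value_per_employee_eur   = hours_saved_per_employee * hourly_rate_eur
org_hours_per_week       = hours_saved_per_employee * affected_employees

hours_saved_per_employee # => 3.75
value_per_employee_eur   # => 150.0
org_hours_per_week       # => 375.0
```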
How we built it
- Ruby on Rails for the web application and core pipeline
- Gemini API for multimodal input processing — extracting text from photos, PDFs, and screenshots
- Claude API for reasoning across cognitive transformation strategies and generating the multiple reading formats
- ElevenLabs API for natural text-to-speech on the audio-first format
- BMAD Method (Breakthrough Method for Agile AI-Driven Development) to structure our workflow with specialized AI agents across planning, architecture, and implementation
- Paid.ai integration for value instrumentation and ROI tracking
- A custom ROI dashboard that calculates and visualises time saved, cost per document, and return on investment in real time
- A Chrome extension that brings Qlarity's reading transformation to any webpage
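The stack above runs as a three-stage sequence: Gemini extracts text, Claude generates each reading format, and ElevenLabs voices the audio version. Here's a sketch of that orchestration; the client objects (`ocr`, `reasoner`, `tts`) and their method names are hypothetical stand-ins for the real API wrappers — only the sequencing and fan-out across formats is the point.

```ruby
# Sketch of the document-transformation pipeline. The injected clients are
# hypothetical placeholders for the Gemini, Claude, and ElevenLabs wrappers.
class TransformationPipeline
  FORMATS = %i[short_sentences bullets simple_vocab step_by_step audio_first].freeze

  def initialize(ocr:, reasoner:, tts:)
    @ocr      = ocr       # Gemini: multimodal input -> text
    @reasoner = reasoner  # Claude: text -> cognitively distinct formats
    @tts      = tts       # ElevenLabs: audio-first format -> speech
  end

  def call(upload)
    text     = @ocr.extract_text(upload)
    # Fan out: generate every format in the "superposition".
    versions = FORMATS.to_h { |fmt| [fmt, @reasoner.transform(text, format: fmt)] }
    # The audio-first version additionally gets synthesized speech.
    versions[:audio_first] = @tts.synthesize(versions[:audio_first])
    versions
  end
end
```

Injecting the clients keeps each stage swappable and lets the pipeline be tested with stubs instead of live API calls.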
Challenges we ran into
Building a multi-format AI agent in under 24 hours meant making hard scoping decisions constantly. The signal tracking system — monitoring how users interact with each reading format to determine the best one — was the most technically challenging piece. We had to balance building something genuinely adaptive versus something we could actually demo reliably.
Getting the transformation prompts right was harder than expected. It's not enough to just "simplify" text — different types of simplification help different people. Tuning the prompts so each format was genuinely distinct and useful required real understanding of how dyslexia affects reading, not just surface-level assumptions.
Integrating multiple APIs (Gemini, Claude, ElevenLabs, Paid.ai) into a single coherent pipeline on Rails, with all of them needing to work in sequence, created coordination challenges across the team. Getting the reading calibration test to meaningfully influence the format selection added another layer of complexity.
We also dealt with the reality of hackathon life — zero sleep, team disagreements on scope, and the constant tension between "make it work" and "make it impressive." We had to cut features we were excited about to focus on a demo that actually told a clear story.
Accomplishments that we're proud of
In under 24 hours, we built a fully functional AI agent that takes any document input, runs a reading calibration, generates multiple cognitively distinct reading formats, tracks user interaction, and adapts accordingly.
We also shipped a Chrome extension, meaning Qlarity doesn't just work on uploaded documents — it works on any webpage. That takes it from a demo to something genuinely usable.
The quantum-inspired framework isn't just branding. The superposition-collapse model genuinely describes how Qlarity works: explore multiple comprehension paths in parallel, then converge on the optimal one through real signals. We're proud that the metaphor holds up technically.
Most importantly, this project is personal. It started from a real experience with dyslexia and we built something we genuinely believe could help millions of people read better every day.
What we learned
That accessibility and business value aren't opposites — they're the same thing. A tool that helps dyslexic employees read faster doesn't just improve their lives, it directly impacts a company's bottom line. The 60x ROI wasn't a number we invented for the pitch — it fell out of the maths naturally.
We also learned that building for accessibility forces better design for everyone. The UI principles we followed for dyslexia (clear fonts, generous spacing, minimal clutter, multiple format options) make the product better for all users, not just dyslexic ones.
On the technical side, we learned how powerful it is to combine multiple AI APIs into one coherent agent. Gemini handles the eyes (multimodal input), Claude handles the brain (reasoning and transformation), and ElevenLabs handles the voice. Each one does what it's best at rather than forcing one model to do everything.
Tracks and challenges
- Agentic AI Track (Paid.ai): Qlarity is an autonomous agent that calibrates to the user, generates multiple strategies, selects the optimal one, and learns over time — all without manual intervention.
- Best Use of Claude: Claude powers the core reasoning engine, generating cognitively distinct reading transformations and adapting strategies based on user profiles.
- Best Use of ElevenLabs: Every reading format has a natural voice output via ElevenLabs, making Qlarity accessible to users who process information better through listening.
- Best "Built on Rails" Project: The entire application and API orchestration pipeline is built end-to-end on Ruby on Rails.
- Best AI-Powered Healthcare Solution: Dyslexia is a neurological condition affecting 780 million people. Qlarity makes reading accessible and reduces the anxiety and depression associated with daily reading struggles.
- Best Use of Data: Qlarity's reading calibration test and ongoing interaction signals generate real behavioural data that drives the format selection and builds personalised cognitive profiles over time.
- Best Use of Paid.ai: Our ROI dashboard, powered by Paid.ai, tracks cost per document, time saved, and return on investment — proving the measurable business value of an AI accessibility tool.
What's next for Qlarity
- Deeper signal tracking — integrating eye tracking and more granular reading behaviour analysis to make the collapse phase even more precise.
- Enterprise deployment — a B2B product where companies can roll out Qlarity across their workforce, with aggregate dashboards showing organisation-wide productivity gains and accessibility compliance.
- Expanded cognitive profiles — extending beyond dyslexia to support ADHD, visual processing disorders, and ESL readers who face similar comprehension challenges.
- Mobile app — camera-first input for on-the-go document transformation, built for students and professionals who need Qlarity outside the office.
Built With
- claude
- css3
- elevenlabs
- gemini
- html
- javascript
- langsmith
- paid.ai
- pixart
- ruby-on-rails