Inspiration

The inspiration for Metro Buddy came from a startling statistic: 1 in 10 people live with dyslexia, yet public transportation systems are built almost entirely on "text-only" logic. In high-stress, fast-paced metro environments, decoding complex station names and scrolling text boards becomes a major barrier. I wanted to build a tool that removes the "reading requirement" from urban mobility, making commuting accessible to everyone, regardless of how they process information.

What it does

Metro Buddy is a neuro-inclusive navigation assistant that shifts the travel experience from "Read to Navigate" to "Listen and Go." It uses an audio-first interface where users can state their destination and receive, in return, real-time, simplified spoken guidance. Instead of text-heavy maps, the app provides "Visual Logic": large symbolic icons and step-by-step "action cards", such as Gate → Platform → Board, that avoid information overload.
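The Gate → Platform → Board flow can be sketched as a simple action-card sequence. This is an illustrative shape, not the actual codebase; the names (`actionCards`, `cardAt`) are hypothetical:

```javascript
// Hypothetical action-card model: each card is one step, one icon, one short phrase.
const actionCards = [
  { icon: "🚪", label: "Gate",     say: "Walk to Gate 3." },
  { icon: "🚉", label: "Platform", say: "Go to Platform 2." },
  { icon: "🚆", label: "Board",    say: "Board the train to Central." },
];

// Only one card is ever rendered; later cards stay hidden to avoid overload.
function cardAt(cards, stepIndex) {
  return cards[Math.min(stepIndex, cards.length - 1)];
}
```

Clamping the index means the final card simply stays on screen once the journey ends, rather than the UI going blank.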

How we built it

I built the application using Next.js and React 19 for a fast, responsive frontend. The "brain" of the app is the Google Gemini API, which analyzes complex transit data and rewrites it into clear, dyslexia-friendly instructions. I integrated the Web Speech API for low-latency voice feedback and Framer Motion for smooth, reassuring animations, and styled the UI with a strict high-contrast design system to maximize readability.
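A rough sketch of the voice layer might look like the following. The `speak` wrapper guards for environments without the Web Speech API, and the word-based splitting rule is an assumption for illustration, not the app's actual chunking strategy:

```javascript
// Split long guidance into short utterances so each spoken chunk stays digestible.
function chunkForSpeech(text, maxWords = 8) {
  const words = text.split(/\s+/).filter(Boolean);
  const chunks = [];
  for (let i = 0; i < words.length; i += maxWords) {
    chunks.push(words.slice(i, i + maxWords).join(" "));
  }
  return chunks;
}

// Browser-only: hand each chunk to the Web Speech API at a calm, slightly slow rate.
function speak(text) {
  if (typeof window === "undefined" || !("speechSynthesis" in window)) return;
  for (const chunk of chunkForSpeech(text)) {
    const utterance = new SpeechSynthesisUtterance(chunk);
    utterance.rate = 0.9; // slightly slower than default for clarity
    window.speechSynthesis.speak(utterance);
  }
}
```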

Challenges we ran into

One of the biggest hurdles was Prompt Engineering for Accessibility. We had to ensure Gemini didn't just summarize routes, but specifically avoided complex jargon and "visual noise" in its descriptions.
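In practice that meant encoding the accessibility constraints directly in the prompt. The wording below is an illustrative reconstruction, not our exact prompt, and `buildAccessibilityPrompt` is a hypothetical helper name:

```javascript
// Illustrative accessibility prompt: rules Gemini must follow when rewriting routes.
function buildAccessibilityPrompt(rawRoute) {
  return [
    "Rewrite the transit route below for a dyslexic reader.",
    "Rules:",
    "- Use short sentences, one action per line.",
    "- No jargon, abbreviations, or line codes without explanation.",
    "- No decorative symbols or dense punctuation (visual noise).",
    "- Start each line with a verb: Walk, Go, Board, Exit.",
    "",
    "Route: " + rawRoute,
  ].join("\n");
}
```

The resulting string is what gets sent to the model, so "don't just summarize" becomes an enforceable rule rather than a hope.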

Accomplishments that we're proud of

We are incredibly proud of our "Step-at-a-Time" logic. Unlike traditional maps that show the whole journey at once, Metro Buddy successfully isolates only the current necessary action, which significantly reduces cognitive load.
Most notably, I successfully integrated an AI Vision Scanner. This feature lets users point their camera at a ticket machine (which can be quite tricky for dyslexic users to operate) and receive calming audio instructions on which buttons to press. I also implemented a Dyslexia-Friendly Ticketing Flow, turning one of the most stressful transit interactions into a simple, verified 3-tap process.
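The "Step-at-a-Time" idea boils down to keeping a cursor over the full journey while exposing only one step to the UI. A minimal sketch, with hypothetical names:

```javascript
// Minimal step-at-a-time state: the whole journey exists in memory,
// but only the current action is ever exposed to the interface.
function createJourney(steps) {
  let index = 0;
  return {
    current: () => steps[index],                          // the only step the UI renders
    advance: () => { if (index < steps.length - 1) index += 1; },
    done: () => index === steps.length - 1,
  };
}

const journey = createJourney(["Walk to Gate 3", "Go to Platform 2", "Board the train"]);
journey.advance(); // the user completed the first action; show the next one
```

Because the UI can only ever read `current()`, showing the whole route at once is impossible by construction, which is what keeps the cognitive load down.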

What we learned

Building Metro Buddy taught us that Accessibility is not a feature, but a foundation. I gained deep insights into neuro-inclusive design, specifically how "Visual Redundancy" (pairing a sound with a shape) can empower a user more than a thousand words of text. With the right approach and techniques, we can make the commute system much more inclusive.
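"Visual Redundancy" in practice means every cue fires on two channels at once. A hypothetical cue table (the shapes, icons, and earcon names are illustrative assumptions):

```javascript
// Hypothetical cue table: every step type is announced both visually (shape + icon)
// and audibly (a short earcon), so neither channel has to carry the message alone.
const cues = {
  gate:     { shape: "square",   icon: "🚪", earcon: "chime-low" },
  platform: { shape: "circle",   icon: "🚉", earcon: "chime-mid" },
  board:    { shape: "triangle", icon: "🚆", earcon: "chime-high" },
};

// Unknown step types fall back to a neutral cue instead of silence.
function cueFor(stepType) {
  return cues[stepType] ?? { shape: "circle", icon: "ℹ️", earcon: "chime-mid" };
}
```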

What's next for Metro Buddy

Ahh well... the next phase for Metro Buddy involves Dynamic Landmark Indexing: in simple terms, using computer vision (Gemini Vision) to identify physical landmarks (like "the big red clock" or "the coffee shop") to guide users more intuitively.

Built With

Next.js, React 19, Google Gemini API, Web Speech API, Framer Motion