We met Osvaldo on a train — a third-year at San Jose State with central vision loss and neurodivergence. He told us he spends more energy navigating the web than actually using it. He's not alone. 1 in 5 people are neurodivergent, and for them, the average website isn't neutral — it's actively hostile. Percept exists because of that conversation.

What it does 

Percept is a browser extension with a voice agent that reshapes any website in real time through natural conversation. You tap the mic, say what you need, and the page responds instantly, on any site. It starts with a 90-second diagnostic that builds your perceptual profile, then pre-adjusts every site you visit against that profile. Voice commands handle the rest: "too bright," "single column," "hide the ads," "read me the first result."
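To make the "perceptual profile" idea concrete, here is a minimal TypeScript sketch. The five dimension names and the CSS mapping are our own illustrative assumptions, not Percept's actual schema:

```typescript
// Hypothetical five-dimensional perceptual profile. The dimension
// names are illustrative guesses, not the project's real schema.
interface PerceptualProfile {
  contrast: number;      // 0 = page default, 1 = maximum contrast boost
  textScale: number;     // multiplier on the base font size
  layoutDensity: number; // 0 = force single column, 1 = original layout
  motion: number;        // 0 = animations disabled
  clutter: number;       // 0 = ads and sidebars hidden
}

// Translate a stored profile into a CSS string that a content
// script could inject as soon as a page loads.
function profileToCss(p: PerceptualProfile): string {
  const rules: string[] = [
    `font-size: ${Math.round(100 * p.textScale)}% !important;`,
  ];
  if (p.contrast > 0.5) {
    rules.push("filter: contrast(1.2) !important;");
  }
  if (p.motion === 0) {
    rules.push("animation: none !important; transition: none !important;");
  }
  return `* { ${rules.join(" ")} }`;
}
```

Pre-adjusting a site then amounts to running `profileToCss` once on load and injecting the result, before any per-command tweaks.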

How we built it

We built a React prototype with a CSS injection engine that translates natural-language commands into real-time page transformations. The diagnostic stores a five-dimensional perceptual profile that auto-applies on load. Production ships as a Chrome extension with a content script, popup UI, and persistent profile store.
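The command-to-transformation step can be sketched as a lookup from a normalized command to a CSS snippet. The real engine resolves free-form speech (the stack lists GPT-4o structured outputs), so this hard-coded table and the function names are purely illustrative:

```typescript
// Illustrative mapping from normalized voice commands to CSS.
// The actual engine handles free-form phrasing; this table is a stand-in.
const COMMAND_RULES: Record<string, string> = {
  "too bright": "html { filter: brightness(0.7); }",
  "single column":
    "aside, nav { display: none; } main { max-width: 60ch; margin: 0 auto; }",
  "hide the ads":
    "[class*='ad-'], [id*='ad-'] { display: none !important; }",
};

// Resolve a spoken command to a CSS rule, or null if unrecognized.
// A content script would inject the returned CSS via a <style> element.
function resolveCommand(command: string): string | null {
  return COMMAND_RULES[command.trim().toLowerCase()] ?? null;
}
```

Keeping resolution separate from injection like this also makes the engine testable without a live page.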

Accomplishments 

The demo is real — not a mockup. Every command executes a live transformation. We're proud of the diagnostic, the framing (cognitive interface layer, not accessibility software), and the fact that every design decision had a clear filter: does this help Osvaldo right now?

What's next

Chrome extension ship, then a self-improving profile that learns from behavior across sessions. Vertical expansion into travel booking, healthcare portals, and financial tools — any high-stakes task where cognitive overload causes real harm. The market is anyone with a browser. We started with the users who needed it most.

Built With

  • React 18
  • TypeScript
  • Vite
  • Tailwind CSS
  • Express
  • Node.js
  • OpenAI GPT-4o
  • OpenAI Structured Outputs (JSON Schema strict mode)
  • Web Speech API