Inspiration
I wanted to create a tool that makes web content accessible to everyone — especially visually impaired users who struggle to read text on screens. That’s how JustHear AI was born — a Chrome extension that listens, translates, summarizes, and proofreads webpages with both manual and voice command controls.
What it does
JustHear AI helps users hear, understand, and interact with any webpage through sound. It reads aloud, translates text into multiple languages, summarizes long content, and proofreads grammar. It can be used manually or hands-free through voice commands such as: 👉 “Read,” “Translate,” “Stop,” and “Summarize.”
This makes it especially useful for:
- Visually impaired users
- Language learners
- Anyone who prefers audio over reading
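The hands-free mode comes down to mapping a recognized speech transcript to one of the four actions. Below is a minimal sketch of that mapping; `parseCommand` and the `dispatch` call are illustrative names, not the extension's actual implementation, and the commented `SpeechRecognition` hookup shows where a browser would feed transcripts in.

```javascript
// Map a raw speech transcript to one of JustHear AI's command names.
// parseCommand is a hypothetical helper for illustration.
function parseCommand(transcript) {
  const text = transcript.trim().toLowerCase();
  if (text.includes('stop')) return 'stop'; // check "stop" first so it always wins
  if (text.includes('read')) return 'read';
  if (text.includes('translate')) return 'translate';
  if (text.includes('summarize')) return 'summarize';
  return null; // unrecognized phrase: ignore rather than guess
}

// In the browser, a Web Speech API recognition handler would feed this, e.g.:
// recognition.onresult = (e) => {
//   const latest = e.results[e.results.length - 1][0].transcript;
//   const action = parseCommand(latest);
//   if (action) dispatch(action); // dispatch(): hypothetical action router
// };
```

Matching on substrings rather than exact phrases keeps commands forgiving ("please read this page" still triggers "Read"), at the cost of occasional false positives.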
How we built it
I built JustHear AI completely on my own using:
- HTML, CSS, and JavaScript for the UI and logic
- Chrome Extension APIs for browser integration
- Chrome's built-in AI (Gemini Nano) for on-device AI
- The Gemini API as an online fallback
- The Web Speech API for speech-to-text and text-to-speech
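The read-aloud path relies on the standard Web Speech API's `speechSynthesis` interface. Here is a sketch of that call, guarded so it degrades outside a browser; `buildUtteranceConfig` and `speak` are illustrative helpers, not the extension's actual code.

```javascript
// Hypothetical helper: collect the utterance settings in one plain object
// so they can be inspected or tested independently of the browser API.
function buildUtteranceConfig(text, lang = 'en-US', rate = 1.0) {
  return { text, lang, rate };
}

// Speak the text with the Web Speech API when available; in non-browser
// runtimes just return the config unchanged.
function speak(text, lang) {
  const cfg = buildUtteranceConfig(text, lang);
  if (typeof speechSynthesis === 'undefined') return cfg; // not in a browser
  const u = new SpeechSynthesisUtterance(cfg.text);
  u.lang = cfg.lang;
  u.rate = cfg.rate;
  speechSynthesis.speak(u);
  return cfg;
}
```

A "Stop" command would then map naturally to `speechSynthesis.cancel()`, which halts any queued utterances.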
Challenges we ran into
- Enabling voice recognition with smooth command response.
- Handling multilingual audio output for translation.
- Maintaining offline capability while using built-in AI.
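The multilingual-output challenge largely comes down to pairing translated text with a synthesis voice for the right language. One common approach, sketched below, is to filter `speechSynthesis.getVoices()` by BCP-47 language prefix; `pickVoice` is an illustrative helper, not necessarily how JustHear AI solves it.

```javascript
// Pick the first available TTS voice whose language matches the requested
// one by primary-subtag prefix ("hi" matches "hi-IN", "en" matches "en-US").
// voices: array of objects with a `lang` property, as returned by
// speechSynthesis.getVoices() in the browser.
function pickVoice(voices, lang) {
  const prefix = lang.split('-')[0].toLowerCase();
  return voices.find(v => v.lang.toLowerCase().startsWith(prefix)) || null;
}
```

If no matching voice is installed, returning `null` lets the caller fall back to the default voice rather than failing silently mid-translation.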
Accomplishments that we're proud of
- Created a fully accessible browser tool for visually impaired users.
- Integrated Chrome's Gemini Nano AI successfully.
- Designed an intuitive interface with both manual and voice control.
- Built and tested everything solo.
What we learned
- Chrome's on-device AI (Gemini Nano) capabilities.
- Managing voice input and output for accessibility.
- Balancing AI processing between offline and online modes.
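The offline/online balancing can be expressed as an offline-first fallback: try the on-device model, and only call the hosted Gemini API if that fails. The sketch below assumes nothing about either API's real signature; both client functions are hypothetical stand-ins (Chrome's built-in AI on one side, a `fetch()` call to the Gemini API on the other).

```javascript
// Offline-first dispatch: prefer the on-device model, fall back to the
// online API. onDevice and online are caller-supplied async functions
// standing in for the real Gemini Nano and Gemini API clients.
async function summarizeWithFallback(text, onDevice, online) {
  try {
    return { source: 'on-device', result: await onDevice(text) };
  } catch (err) {
    // Built-in model unavailable or failed: use the online API instead.
    return { source: 'online', result: await online(text) };
  }
}
```

Tagging each result with its `source` also makes it easy to show the user whether a response stayed on-device, which matters for privacy-sensitive pages.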
What's next for JustHear AI
- Adding voice-based navigation for all web elements.
- Customizing voice type and reading speed.
- Publishing to the Chrome Web Store for public use.
Built With
- chromeextensionapis
- css
- geminiapi
- gemininano
- html
- javascript
- webspeechapi