Inspiration

Late one night, while struggling to stay focused on dense online content, we realized something important: the problem wasn’t just difficulty—it was friction. For neurodivergent individuals with ADHD, ASD, or Dyslexia, the web isn’t designed for how they think, process, or feel.

Existing tools are static—they don’t adapt in real time. We wanted to build something that doesn’t just assist, but understands. That’s how Neuro-Vibe Assistant was born: an intelligent companion that senses user state and dynamically reshapes the web experience.

What it does

Neuro-Vibe Assistant is a neuro-adaptive Chrome extension that personalizes web content in real time using multimodal AI.

🔹 Real-time Adaptation
Detects emotional and cognitive states (focus, stress, overload) and adjusts the UI instantly.

🔹 Multimodal Intelligence
Uses audio, video, and screen input simultaneously to understand user context.

🔹 🧠 Simplify & Explain (Core Feature)

Converts complex webpages into:
📌 Summary
🔹 Key Points
🧩 Simple Explanation
Two modes: Basic and Very Simple. Includes built-in text-to-speech.
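For the built-in text-to-speech, very long utterances can stall or cut off in Chrome's speechSynthesis, so simplified text is typically split into sentence-sized chunks before being spoken. A minimal sketch of such a chunker; the function name and the 200-character limit are illustrative assumptions, not the extension's actual values:

```javascript
// Split long text into utterance-sized chunks at sentence boundaries,
// so each chunk stays under a safe length for speechSynthesis.
// maxLen = 200 is an assumed value, not a documented Chrome limit.
function chunkForSpeech(text, maxLen = 200) {
  const sentences = text.match(/[^.!?]+[.!?]*\s*/g) || [text];
  const chunks = [];
  let current = "";
  for (const s of sentences) {
    if ((current + s).length > maxLen && current) {
      chunks.push(current.trim());
      current = "";
    }
    current += s;
  }
  if (current.trim()) chunks.push(current.trim());
  return chunks;
}

// In the extension, each chunk would then be queued as its own utterance:
// chunks.forEach(c => speechSynthesis.speak(new SpeechSynthesisUtterance(c)));
```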

🔹 Personalized Accessibility Support

ADHD → Focus mode, distraction blocking
ASD → Calm UI, reduced sensory input
Dyslexia → Readable fonts, spacing, audio assist
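One way to implement per-condition support is a profile-to-CSS mapping that a content script injects into the page. The profile keys, properties, and values below are illustrative assumptions, not the extension's actual settings:

```javascript
// Hypothetical mapping from a selected accessibility profile to the
// CSS a content script would inject. Values are illustrative only.
const PROFILE_STYLES = {
  adhd:     { "animation": "none", "filter": "none" },
  asd:      { "animation": "none", "filter": "saturate(60%)" },
  dyslexia: { "letter-spacing": "0.12em", "line-height": "1.8" },
};

function cssForProfile(profile) {
  const rules = PROFILE_STYLES[profile];
  if (!rules) return "";
  const body = Object.entries(rules)
    .map(([prop, val]) => `  ${prop}: ${val} !important;`)
    .join("\n");
  return `body {\n${body}\n}`;
}
```

In a Manifest V3 extension the resulting string could be applied with `chrome.scripting.insertCSS`, keeping all styling logic in one place.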

🔹 Sensory Overload Protection
Automatically triggers dark mode, mutes tabs, and reduces stimuli when stress is detected.
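The trigger logic can be sketched as a pure mapping from a detected state to UI actions; the thresholds and action names here are assumptions for illustration, not the shipped values:

```javascript
// Map a detected stress score (0..1) to concrete protection actions.
// Thresholds (0.7, 0.85) and action names are illustrative assumptions.
function overloadActions(state) {
  const actions = [];
  if (state.stress > 0.7) {
    actions.push("enable-dark-mode", "reduce-motion");
  }
  if (state.stress > 0.85) {
    actions.push("mute-tabs");
  }
  return actions;
}

// In the extension, these actions would be dispatched as
// chrome.runtime messages to the background worker / content script.
```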

How we built it

We combined cutting-edge AI with lightweight web technologies:

AI Engine: Gemini 2.0/2.5 Multimodal Live API (WebSocket-based real-time processing)
Vision Processing: MediaPipe Face Landmarker for biomarker detection
Frontend: HTML5, CSS3, JavaScript (Vanilla ES6)
Extension Framework: Chrome Extension Manifest V3

💡 Key innovation: We built a real-time feedback loop where user state → AI analysis → UI adaptation happens continuously.

Challenges we ran into

⚡ Real-time latency
Processing multimodal streams without lag was difficult. We optimized WebSocket handling and event throttling.
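The throttling idea can be shown as a small utility that caps how often events are pushed over the WebSocket and keeps only the latest pending call, dropping stale frames. The interval value is illustrative:

```javascript
// Throttle: fire immediately if the interval has elapsed, otherwise
// schedule only the LATEST call for the end of the interval.
// Stale intermediate frames are simply dropped.
function throttle(fn, intervalMs) {
  let last = 0;
  let pending = null;
  return (...args) => {
    const now = Date.now();
    if (now - last >= intervalMs) {
      last = now;
      fn(...args);
    } else {
      clearTimeout(pending); // discard the previously queued frame
      pending = setTimeout(() => {
        last = Date.now();
        fn(...args);
      }, intervalMs - (now - last));
    }
  };
}

// Example: cap frame uploads at roughly 4 per second.
// const sendFrame = throttle(frame => ws.send(frame), 250);
```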

⚡ Accurate biomarker detection
Facial cues aren't always reliable, so we had to balance sensitivity against false triggers.
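A common way to make that trade-off explicit is a hysteresis gate: require several consecutive "stressed" frames before triggering, and several calm frames before releasing, so a single misread frame cannot flip the UI. A sketch, with an assumed threshold and window:

```javascript
// Hysteresis gate over per-frame stress scores (0..1).
// threshold and frame-count values are illustrative assumptions.
function makeStressGate({ threshold = 0.7, frames = 5 } = {}) {
  let hot = 0, cool = 0, active = false;
  return (score) => {
    if (score >= threshold) { hot++; cool = 0; }
    else { cool++; hot = 0; }
    if (!active && hot >= frames) active = true;   // sustained stress
    if (active && cool >= frames) active = false;  // sustained calm
    return active;
  };
}
```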

⚡ Content extraction
Webpages are messy. Filtering useful content (without ads/navigation noise) required custom parsing logic.
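The filtering heuristic can be illustrated on a plain object tree (in the extension the same idea runs over the live DOM); the skip lists below are assumptions, not the exact rules we used:

```javascript
// Walk a node tree and collect text only from content-bearing elements,
// skipping navigation, ads, and page chrome. Operates on plain objects
// ({ tag, className, text, children }) so the heuristic is testable
// outside a browser; tag and class patterns are illustrative.
const SKIP_TAGS = new Set(["nav", "aside", "footer", "script", "style"]);
const SKIP_CLASS = /(ad|banner|promo|sidebar|cookie)/i;

function extractText(node) {
  if (!node || SKIP_TAGS.has(node.tag)) return "";
  if (node.className && SKIP_CLASS.test(node.className)) return "";
  const own = node.text || "";
  const kids = (node.children || []).map(extractText).filter(Boolean);
  return [own, ...kids].filter(Boolean).join(" ");
}
```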

⚡ User privacy concerns
Handling camera/mic input responsibly was critical; we ensured all processing is transparent and user-controlled.

Accomplishments that we're proud of

🏆 Built a fully working real-time neuro-adaptive system (not just a concept)
🏆 Integrated multimodal AI + UI adaptation seamlessly
🏆 Created a high-impact accessibility tool aligned with real-world needs
🏆 Delivered a polished Simplify feature with structured outputs + TTS
🏆 Designed for multiple neurodivergent conditions in one unified solution

What we learned

📚 Real-time AI systems require careful trade-offs between speed and accuracy
📚 Accessibility isn't one-size-fits-all; it must be dynamic and personalized
📚 Multimodal input (vision + audio + context) is far more powerful than single-input AI
📚 User trust and transparency are just as important as technical performance

What's next for Neuro-Vibe

🚀 Mobile & cross-platform support (beyond Chrome)
🚀 Personalized learning models that adapt over time to each user
🚀 Offline lightweight AI mode for privacy-first usage
🚀 Integration with education platforms (LMS, YouTube, e-learning tools)
🚀 Community feedback loop to improve neurodivergent support patterns
🚀 Clinical validation partnerships to enhance accuracy and impact
