Inspiration

The internet is often a chaotic, overstimulating place. For neurodivergent individuals—particularly those with ADHD or Autism—dense layouts, flashing ads, and complex navigation can lead to immediate sensory overload. We realized that while "Dark Mode" exists, there is no "Calm Mode" that adapts to a user's actual mental state in real time. We wanted to build a "Liquid UI" that reshapes itself based on the user's "vibe."
What it does

Neuro-Vibe is a dynamic accessibility engine. It uses a webcam-enabled "vibe check" to detect signs of cognitive load or stress in a user's facial expression. Once detected, it triggers Gemini 3 to perform "Vibe Coding"—instantly generating and injecting custom CSS and JS into the webpage.
Visual Sanitization: It expands white space and removes distracting banners.
Content Distillation: It summarizes long, overwhelming paragraphs into scannable bullet points.
Adaptive Typography: It switches fonts to highly readable, neuro-friendly styles.
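The injection step behind these transformations can be sketched as a small helper pair. This is a minimal illustration, not the project's actual code: the function names and the specific CSS rules are hypothetical stand-ins for what Gemini would generate dynamically.

```javascript
// Hypothetical sketch of the "Visual Sanitization" step: build a calm-mode
// stylesheet and wrap it for injection. In the real project this CSS is
// generated on the fly by Gemini; the rules below are illustrative only.

// Build a CSS string that widens spacing, hides banner-like elements,
// and swaps in a highly readable typeface.
function buildCalmCss() {
  return [
    // Expand white space: looser line height, narrower text column.
    "body { line-height: 1.8; max-width: 42rem; margin: 0 auto; }",
    // Remove distracting banners and ads.
    ".banner, .ad, [class*='promo'] { display: none !important; }",
    // Neuro-friendly typography: plain, legible sans-serif stack.
    "body { font-family: 'Atkinson Hyperlegible', Verdana, sans-serif; }",
  ].join("\n");
}

// Wrap the CSS in a <style> tag so it can be appended to document.head.
function asStyleTag(css) {
  return `<style data-neuro-vibe="calm">\n${css}\n</style>`;
}
```

In a browser context the result would be applied with something like `document.head.insertAdjacentHTML("beforeend", asStyleTag(buildCalmCss()))`.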
How we built it

We leveraged Gemini 3's native multimodality to create a seamless feedback loop.
Gemini 3 Flash: Handles high-frequency, low-latency facial inference to detect "stress" vs. "calm" states nearly instantaneously.
Gemini 3 Pro: Acts as the "Architect," analyzing the page's DOM and generating the generative CSS/JS "skins" needed for the transformation.
Frontend: Built with React and Tailwind CSS, simulating a browser environment where the "Liquid Design" can be demonstrated in real time.
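The feedback loop above can be sketched as a routing function: the cheap, low-latency classifier (Flash) runs on every processed frame, and the expensive "Architect" (Pro) is invoked only when cognitive load is detected. The model calls here are mocked stubs, since we cannot reproduce the project's actual Gemini prompts or model identifiers.

```javascript
// Hypothetical sketch of the two-model feedback loop. Both functions below
// are mocks; in the real project they would be Gemini API calls (Flash for
// frame classification, Pro for DOM-aware code generation).

// Stand-in for Gemini 3 Flash: classify a webcam frame as stress or calm.
function detectVibe(frame) {
  // Real version: send the frame to the low-latency model and parse a label.
  return frame.tenseFeatures > 0.5 ? "stress" : "calm";
}

// Stand-in for the Gemini 3 Pro "Architect": given a DOM snapshot,
// return a generated "skin" (CSS/JS) for the calm transformation.
function generateSkin(domSnapshot) {
  return { css: "body { line-height: 1.8; }", js: "" };
}

// The loop itself: only invoke the expensive Architect call when Flash
// reports cognitive load, so the common (calm) path stays cheap.
function vibeLoop(frame, domSnapshot) {
  const state = detectVibe(frame);
  if (state === "stress") {
    return { state, skin: generateSkin(domSnapshot) };
  }
  return { state, skin: null };
}
```

The design choice this illustrates is the cost split: the high-frequency path never touches the heavier model, which is what keeps the UI response immediate.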
Challenges we ran into

The primary challenge was proving "intelligence" over "hardcoded" states. We addressed this by building a "Glass Box" Debugger that shows Gemini’s raw reasoning and the live-generated code it injects. Balancing the latency of video processing with the need for immediate UI response required optimizing our API calls to use Gemini 3 Flash for the initial detection.
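One common way to balance video-processing latency against API cost, as described above, is to throttle the detection calls so that at most one frame per interval reaches the model and intermediate frames are dropped rather than queued. The sketch below is an assumption about how such a limiter could look; the interval value and names are illustrative.

```javascript
// Hypothetical throttle for the webcam inference loop: fire the detection
// call at most once per `intervalMs`, dropping frames that arrive sooner.
// The `now` parameter is injectable so the behavior is testable.
function makeThrottledDetector(detectFn, intervalMs, now = Date.now) {
  let lastCall = -Infinity;
  return function (frame) {
    const t = now();
    if (t - lastCall < intervalMs) return null; // frame dropped, no API call
    lastCall = t;
    return detectFn(frame);
  };
}
```

Dropping (rather than queueing) stale frames matters here: a queued frame would be classified after the user's expression has already changed, defeating the point of real-time detection.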
Accomplishments that we're proud of

We successfully created a UI that doesn't just look different, but feels different. Seeing a cluttered news site transform into a soothing, readable document purely through a facial expression felt like a breakthrough in Affective Computing.
What we learned

We learned that multimodal AI isn't just about chatbots; it's about making software that empathizes with the user's biological state. We also discovered the incredible efficiency of Gemini 3 Flash for real-time agentic workflows.
What's next for Neuro-Vibe

We plan to move beyond a demo environment and develop a full Chrome Extension. We also want to integrate "audio vibes," where Gemini generates ambient soundscapes to further assist in grounding users during high-stress browsing sessions.
Built With
- css3
- docker
- express.js
- facerecognition
- faiss
- fastapi
- firebase
- firestore
- geminiapi
- google-cloud
- html5
- javascript
- mongodb
- node.js
- pinecone
- python
- pytorch
- react
- speechtotext
- tailwindcss
- tensorflow
- typescript
- vectordatabase