Inspiration
Over 60 million people in the U.S. are neurodivergent, including individuals with ADHD, autism, anxiety, and more.
Most apps and systems today assume everyone experiences the world the same way ⚙️. But that’s not true.
For many people, simple things like walking to a coffee shop ☕ can be stressful because of:
- Loud noises 🔊
- Bright lighting 💡
- Crowded spaces 🚶‍♂️
Around 85% of neurodivergent individuals deal with sensory overload daily, yet few tools warn them about overwhelming environments ahead of time.
That’s why we built Haven 🚀.
Instead of just giving directions, Haven helps people move through the world in a way that feels comfortable, predictable, and safe 💙.
What it does
Haven is a navigation and daily support app built for people who are sensitive to their environment. It focuses on three main things:
- Finding comfortable places 📍
Haven scores locations based on:
  - Noise 🔊
  - Lighting 💡
  - Crowd levels 🚶‍♂️
If a place looks overwhelming, it suggests calmer nearby options so users can make better decisions.
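As a rough illustration of the scoring idea, here is a minimal sketch in TypeScript. The 0..1 normalization, the weights, and the `comfortScore` shape are our assumptions for illustration, not Haven's actual model:

```typescript
// Hypothetical shape of a location's sensory readings, each normalized to
// 0..1 (0 = calm, 1 = overwhelming). Names and weights are illustrative only.
interface ComfortReading {
  noise: number;   // relative loudness
  light: number;   // brightness / glare
  crowd: number;   // crowd density
}

// Per-user weights let someone who is mostly noise-sensitive
// weigh sound more heavily than lighting or crowds.
interface ComfortWeights {
  noise: number;
  light: number;
  crowd: number;
}

// Returns a 0..100 comfort score, where higher means calmer.
function comfortScore(r: ComfortReading, w: ComfortWeights): number {
  const total = w.noise + w.light + w.crowd;
  const discomfort =
    (w.noise * r.noise + w.light * r.light + w.crowd * r.crowd) / total;
  return Math.round((1 - discomfort) * 100);
}

// Example: a loud, dim, moderately busy cafe for a noise-sensitive user.
const score = comfortScore(
  { noise: 0.8, light: 0.3, crowd: 0.5 },
  { noise: 3, light: 1, crowd: 1 },
); // ≈ 36, so Haven would surface calmer nearby options instead
```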
- Helping with tasks 🗂️
Haven includes simple tools to make tasks easier, like:
  - Time blocking ⏱️
  - Checklists ✅
  - Body doubling (working alongside someone)
This helps users stay focused and reduce stress when completing daily tasks.
- Real-time environment awareness 👓📱
Haven uses the camera and microphone to understand what’s happening around the user in real time:
  - Detects people to estimate crowd levels
  - Measures noise through the mic 🎤
  - Checks brightness using the camera
If things get too overwhelming, Haven speaks to the user 🗣️ and asks how they’re feeling. It can then suggest actions like taking a different route or pausing.
How we built it
- Frontend: Built with React and Next.js for a smooth UI
- Camera analysis: Uses TensorFlow.js to detect people and objects in real time
- Audio: Measures noise levels using browser audio tools
- Brightness: Estimates light levels from the camera feed (all three sensor reads are sketched after this list)
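For reference, here is a minimal sketch of the three in-browser sensor reads, using TensorFlow.js with the coco-ssd model listed under Built With plus standard browser APIs. Function names, the 32×32 sampling frame, and the module layout are our illustrative choices, not Haven's exact code:

```typescript
import '@tensorflow/tfjs';                      // registers the WebGL backend
import * as cocoSsd from '@tensorflow-models/coco-ssd';

// Crowd: count "person" detections in the current video frame.
export async function estimateCrowd(video: HTMLVideoElement): Promise<number> {
  const model = await cocoSsd.load();           // cache this in real code
  const detections = await model.detect(video);
  return detections.filter((d) => d.class === 'person').length;
}

// Noise: RMS level (0..1) of the microphone via the Web Audio API.
export function makeNoiseMeter(stream: MediaStream): () => number {
  const ctx = new AudioContext();
  const analyser = ctx.createAnalyser();
  ctx.createMediaStreamSource(stream).connect(analyser);
  const samples = new Uint8Array(analyser.fftSize);
  return () => {
    analyser.getByteTimeDomainData(samples);    // waveform centered at 128
    let sumSquares = 0;
    for (const s of samples) sumSquares += ((s - 128) / 128) ** 2;
    return Math.sqrt(sumSquares / samples.length);
  };
}

// Brightness: average luma (0..1) of a downscaled camera frame.
export function estimateBrightness(video: HTMLVideoElement): number {
  const canvas = document.createElement('canvas');
  canvas.width = 32;                            // a tiny frame is enough
  canvas.height = 32;
  const g = canvas.getContext('2d')!;
  g.drawImage(video, 0, 0, canvas.width, canvas.height);
  const { data } = g.getImageData(0, 0, canvas.width, canvas.height);
  let luma = 0;
  for (let i = 0; i < data.length; i += 4) {
    luma += 0.299 * data[i] + 0.587 * data[i + 1] + 0.114 * data[i + 2];
  }
  return luma / (255 * (data.length / 4));
}
```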
When something crosses a user’s comfort level:
- AI generates a short check-in message
- Text is converted to speech
- User responds using voice
- The system reacts accordingly (see the sketch below)
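Here is a hedged sketch of that loop, using the browser's built-in Web Speech API as a stand-in for the LLM and TTS services listed under Built With (ElevenLabs etc.). The threshold check, the check-in wording, and the keyword matching are illustrative only:

```typescript
// Speak a check-in message and wait for it to finish.
function speak(text: string): Promise<void> {
  return new Promise((resolve) => {
    const utterance = new SpeechSynthesisUtterance(text);
    utterance.onend = () => resolve();
    speechSynthesis.speak(utterance);
  });
}

// Capture one spoken reply. Chrome exposes this as webkitSpeechRecognition.
function listenOnce(): Promise<string> {
  const Recognition =
    (window as any).SpeechRecognition ?? (window as any).webkitSpeechRecognition;
  return new Promise((resolve, reject) => {
    const rec = new Recognition();
    rec.lang = 'en-US';
    rec.onresult = (e: any) => resolve(e.results[0][0].transcript);
    rec.onerror = (e: any) => reject(e.error);
    rec.start();
  });
}

// Hypothetical trigger: fires when a reading crosses the user's comfort level.
export async function checkIn(noise: number, noiseLimit: number) {
  if (noise <= noiseLimit) return;
  await speak('It is getting loud around you. How are you feeling?');
  const reply = (await listenOnce()).toLowerCase();
  if (reply.includes('overwhelmed') || reply.includes('not great')) {
    // e.g. suggest a calmer route or a pause (app-specific handlers go here)
  }
}
```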
For navigation:
- Google Maps is used for directions
- Leaflet handles live tracking
- GPS updates routes in real time (a live-tracking sketch follows)
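A minimal live-tracking sketch with Leaflet and the browser Geolocation API; the tile server, coordinates, and zoom level are placeholder choices, and the route-refresh step is left as a comment since the directions come from a separate Google Maps call:

```typescript
import L from 'leaflet';

// Placeholder center; the real app starts at the user's first GPS fix.
const map = L.map('map').setView([40.7128, -74.006], 16);
L.tileLayer('https://{s}.tile.openstreetmap.org/{z}/{x}/{y}.png', {
  attribution: '© OpenStreetMap contributors',
}).addTo(map);

const me = L.marker([40.7128, -74.006]).addTo(map);

// watchPosition streams GPS fixes; each fix moves the marker and recenters.
navigator.geolocation.watchPosition(
  (pos) => {
    const here: [number, number] = [pos.coords.latitude, pos.coords.longitude];
    me.setLatLng(here);
    map.panTo(here);
    // In Haven, a route refresh against the directions service would go
    // here whenever the user drifts off the suggested path.
  },
  (err) => console.warn('GPS unavailable:', err.message),
  { enableHighAccuracy: true },
);
```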
We also use a backend to handle heavier processing and improve recommendations.
Challenges we ran into
- Performance: Running camera analysis, audio processing, and UI updates at the same time slowed things down. We had to optimize carefully to keep it smooth.
- Voice interaction: Getting the AI → speech → user response → action loop to work smoothly was tricky. If one step lagged, everything broke.
- Maps integration: We used multiple mapping systems, and they didn’t always behave the same way. Making them feel seamless took extra effort.
What we’re proud of
- Everything runs in the browser, no special hardware needed 💻
- Real-time detection of crowd, noise, and lighting
- A system that actually responds and talks to the user
- Smart rerouting when environments become overwhelming
What we learned
- Browsers today are way more powerful than expected
- Combining multiple AI tools in real time is hard but possible
- Accessibility is not just about visuals — it’s about how people feel and process the world
What’s next 🚀
We want to:
- Integrate with smart glasses (like Meta Ray-Bans) so the app works passively while walking
- Connect to wearables to track:
  - Heart rate ❤️
  - Body temperature 🌡️
  - Stress signals

This will help Haven understand not just the environment, but also how the user is feeling in real time, making support even more personal and accurate.
Built With
- antigravity
- base64imagecapture
- claudemcp
- coco-ssd
- elevenlabs
- figma
- gemini
- google-maps
- googleoauth
- loveable
- nextjs
- openai
- python
- spline
- supabase
- tensorflow.js
