Inspiration

Traveling is exciting, but not everyone can access destinations safely or comfortably. People with asthma, anxiety triggers, or mobility limitations have to constantly research: “Will I be safe here?”, “Can I physically access this location?”, “Will the environment trigger my condition?”

We realized that travel guides are not designed for accessibility — especially for young, independent travelers with health considerations. So we built SenseTheWorld+, a platform that personalizes travel recommendations based on individual health, mobility, and sensory needs.

What It Does

SenseTheWorld+ helps travelers make safe and empowering decisions by providing:

- An AI-powered chat assistant that gives personalized travel guidance based on medical and accessibility needs (powered by Gemini AI).
- Risk-aware activity suggestions that adapt to your profile (e.g., asthma, triggers, mobility level).
- Voice-controlled navigation for hands-free accessibility.
- A screen reader "Read Aloud" mode for visually impaired users.
- Community matching to find other travelers with similar accessibility needs.
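All of these features key off a single accessibility profile. A minimal sketch of what such a profile might look like in Python (the field names are our illustration here, not the actual Supabase schema):

```python
from dataclasses import dataclass, field

# Hypothetical profile shape; the real Supabase schema may differ.
@dataclass
class AccessibilityProfile:
    mobility_level: int                                # e.g. 0 = full mobility, 3 = wheelchair user
    conditions: set[str] = field(default_factory=set)  # e.g. {"asthma"}
    triggers: set[str] = field(default_factory=set)    # e.g. {"crowds", "loud noise"}

profile = AccessibilityProfile(
    mobility_level=1,
    conditions={"asthma"},
    triggers={"loud noise"},
)
```

Every recommendation surface (chat, activity suggestions, matching) reads from this one profile, which keeps the features consistent with each other.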

How We Built It

- Frontend: React + TypeScript + Tailwind + Vite
- Authentication & Database: Supabase (profiles, mobility levels, conditions, triggers)
- AI Backend API: FastAPI server calling Google Gemini 2.5 Flash / Pro
- Voice Interaction: Web Speech API (speech recognition + text-to-speech)
- Risk Engine: Custom decision logic that analyzes:
  - the user's mobility limitations
  - medical conditions
  - environmental factors (noise, altitude, air quality)
  - regional outbreak and safety data

Challenges We Ran Into

- Configuring Gemini API models and ensuring correct environment variable handling.
- Integrating voice control smoothly without interrupting the UI.
- Designing a risk scoring system that feels helpful, not medical or diagnostic (an important ethical balance).
- Keeping the app visually clean while offering many accessibility features.
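The environment variable issue mostly came down to failing fast at startup instead of getting opaque API errors later. A minimal sketch of that pattern (the variable name `GEMINI_API_KEY` is our assumption for illustration):

```python
import os

# Hypothetical startup check; the variable name is an assumption on our side.
def load_gemini_key() -> str:
    """Read the Gemini API key from the environment, failing loudly if absent."""
    key = os.environ.get("GEMINI_API_KEY")
    if not key:
        raise RuntimeError(
            "GEMINI_API_KEY is not set; add it to your environment or .env file"
        )
    return key
```

Calling this once when the FastAPI server boots surfaces misconfiguration immediately, rather than on the first chat request.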

Accomplishments We're Proud Of

- Successfully implemented context-aware AI travel guidance.
- Added voice navigation and a "read page aloud" mode for accessibility.
- Built a functional matching algorithm for connecting similar travelers.
- Completed the entire system in one hackathon weekend.
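The matching idea can be sketched as set overlap between two profiles, e.g. Jaccard similarity over shared conditions and triggers. This is an illustrative stand-in, not the exact algorithm we shipped:

```python
# Illustrative similarity sketch; the production matcher may weight fields differently.
def jaccard(a: set[str], b: set[str]) -> float:
    """Jaccard similarity: |intersection| / |union|, 0.0 when both sets are empty."""
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)

def match_score(p1: dict, p2: dict) -> float:
    """Average similarity across the accessibility-relevant profile fields."""
    return (0.5 * jaccard(p1["conditions"], p2["conditions"])
            + 0.5 * jaccard(p1["triggers"], p2["triggers"]))

alice = {"conditions": {"asthma"}, "triggers": {"crowds"}}
bob = {"conditions": {"asthma"}, "triggers": {"crowds", "loud noise"}}
print(round(match_score(alice, bob), 2))  # → 0.75
```

A score like this is symmetric and cheap to compute pairwise, which is enough to rank candidate travel companions within a hackathon-sized user base.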

What We Learned

- How to work with real-time speech recognition in the browser.
- How to structure a multi-feature React accessibility interface clearly.
- How to integrate Gemini models safely in an ethical recommendation context.

What's Next

- Integrating live hospital and pharmacy locations using a Maps API.
- Multi-language support, especially sign language avatar overlays.
- Adding AI-generated visual safety maps for crowds, allergens, and noise zones.
