Inspiration
Imagine living without stable shelter, with limited access to clean water and no easy way to find basic healthcare. For thousands of unhoused individuals, this is the daily reality. Pierce's volunteering experience with organizations supporting the homeless revealed a critical gap: while technology had advanced in many sectors, access to basic health support for the homeless had stagnated. We envisioned a world where healthcare help could come to anyone, anywhere, in the simplest form possible - through natural conversation and AI-driven guidance. That's how SnapAid was born, inspired by the idea of a real-world Baymax: an accessible, voice-driven healthcare companion that can offer real-time triage, resource navigation, and life-saving support to the people who need it most. We didn't want to build just another app - we wanted to build a tool that could meet people where they are, no matter their device, situation, or stress level.
What it does
SnapAid empowers vulnerable individuals by delivering quick, empathetic healthcare triage and critical resource discovery through voice and image-based interactions. Users can speak naturally or upload a photo of an injury or condition. SnapAid’s backend uses multimodal AI (Google Gemini 2.5 and OpenAI APIs) to analyze symptoms, determine severity, and recommend actionable next steps - whether that's self-care advice, finding an open clinic, or simply locating the nearest restroom or water station.
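Under the hood, a single triage call looks roughly like the sketch below - a simplified version assuming the google-generativeai Python SDK, where the model name, prompt, and `triage()` helper are illustrative placeholders rather than our exact production code:

```python
# Simplified triage call, assuming the google-generativeai SDK.
# The model name, prompt, and triage() helper are illustrative placeholders.
import google.generativeai as genai
from PIL import Image

genai.configure(api_key="YOUR_GEMINI_API_KEY")  # in practice, read from the environment
model = genai.GenerativeModel("gemini-2.5-flash")  # exact model name is an assumption

TRIAGE_PROMPT = (
    "You are a compassionate triage assistant. Given the user's words and an "
    "optional photo, rate severity as LOW, MODERATE, or URGENT and suggest "
    "one concrete, actionable next step."
)

def triage(user_text: str, photo_path: str | None = None) -> str:
    """Send the user's words (and optionally an image) to Gemini for triage."""
    parts: list = [TRIAGE_PROMPT, user_text]
    if photo_path:
        parts.append(Image.open(photo_path))  # the SDK accepts PIL images directly
    return model.generate_content(parts).text
```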
Beyond triage, SnapAid also:
Locates clinics, pharmacies, shelters, water stations, and restrooms in real time.
Filters options by open hours, distance, and resource type (see the sketch at the end of this section).
Provides dynamic Google Maps routing to those resources.
Tracks evolving symptoms during a session to identify worsening conditions - all while protecting user anonymity.
SnapAid is designed to work easily even from public kiosks, shared devices, or basic smartphones - removing barriers many unhoused individuals face today.
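The filtering logic mentioned above boils down to a distance-plus-open-hours check. Here is a minimal sketch, where the `Resource` fields and `nearby()` helper are assumed names for illustration:

```python
# Sketch of the resource filter: rank nearby resources by distance and drop
# anything currently closed. Field names and the simplified hours model are
# assumptions for illustration, not our exact data schema.
from dataclasses import dataclass
from datetime import datetime
from math import asin, cos, radians, sin, sqrt

@dataclass
class Resource:
    name: str
    kind: str          # "clinic", "restroom", "water", "shelter", ...
    lat: float
    lon: float
    open_hour: int     # simplified open/close times on a 24h clock
    close_hour: int

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def nearby(resources, kind, lat, lon, max_km=2.0, now=None):
    """Return open resources of the requested kind, nearest first."""
    hour = (now or datetime.now()).hour
    hits = [
        (haversine_km(lat, lon, r.lat, r.lon), r)
        for r in resources
        if r.kind == kind and r.open_hour <= hour < r.close_hour
    ]
    return [r for d, r in sorted(hits, key=lambda t: t[0]) if d <= max_km]
```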
How we built it
SnapAid was the result of careful system design, aiming for speed, resilience, and real-world usability:
We built the backend using FastAPI (Python) to orchestrate the different AI services and serve fast, reliable API endpoints (a minimal endpoint is sketched at the end of this section).
The frontend was built in TypeScript, offering simple interaction patterns for both text and voice input.
Google Generative AI (Gemini 2.5) powered both conversational understanding and vision-based symptom triage.
OpenAI APIs supported enhanced text-to-speech output and supplemental image analysis for high accessibility.
The Google Maps API handled dynamic routing and live lookup of nearby resources.
LA Open Data APIs provided critical public information on clinics, restrooms, and shelters, cleaned and normalized for consistency.
Ngrok enabled rapid testing and public exposure of local APIs during development.
Lens Studio was explored for future AR integration, envisioning a world where users could visually "see" help around them in real-time.
We implemented an intelligent semantic mapping layer to translate freeform user prompts ("I feel sick", "Where can I sleep?") into actionable triage or search queries.
From orchestrating multiple services to handling file transfers and maintaining low-latency conversations, every layer of SnapAid was built to be fast, intuitive, and field-ready.
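As a concrete example of that orchestration, the sketch below shows a minimal FastAPI endpoint in the spirit of our backend - the `/triage` path, request shape, and `run_triage()` helper are illustrative assumptions, not our exact code:

```python
# Minimal FastAPI endpoint in the spirit of our orchestration layer.
# The /triage path, field names, and run_triage() helper are illustrative.
from fastapi import FastAPI, File, Form, UploadFile
from fastapi.concurrency import run_in_threadpool

app = FastAPI()

def run_triage(message: str, image_bytes: bytes | None) -> str:
    # Placeholder: this is where the Gemini / OpenAI calls would happen.
    return f"(triage advice for: {message!r})"

@app.post("/triage")
async def triage_endpoint(message: str = Form(...),
                          photo: UploadFile | None = File(None)):
    image_bytes = await photo.read() if photo else None
    # Run the blocking model call off the event loop to keep responses snappy.
    advice = await run_in_threadpool(run_triage, message, image_bytes)
    return {"advice": advice}
```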
Challenges we ran into
Working on SnapAid pushed us to overcome some serious technical and design hurdles. First, building a semantic mapping engine wasn’t trivial. Real-world language is chaotic - people rarely say things perfectly. Designing a system that could flexibly translate casual, urgent, or emotional language into structured actions was an ongoing challenge. Managing multimodal latency was perhaps the hardest technical battle. With vision models, conversation models, and location lookups happening simultaneously, we had to aggressively optimize our pipelines to maintain a natural, fluid user experience. Finally, designing for extreme accessibility forced us to rethink many assumptions. Voice input wasn't enough; flows had to work under noise, poor connectivity, fatigue, or emotional stress. Keeping the user interface frictionless became just as important as AI accuracy. Despite these challenges, our shared belief in SnapAid’s mission kept pushing us forward.
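To make the semantic mapping challenge concrete, here is a toy sketch of the contract that layer satisfies. Our real implementation leans on the LLM rather than keyword lists, so the intents, keywords, and `map_intent()` helper below are purely illustrative:

```python
# Toy version of the semantic mapping contract: fold messy, free-form
# utterances into a small set of structured actions. Our real layer leans
# on the LLM, so these intents and keywords are purely illustrative.
INTENT_KEYWORDS = {
    "triage":        ["sick", "hurt", "pain", "bleeding", "dizzy"],
    "find_shelter":  ["sleep", "shelter", "stay", "bed"],
    "find_water":    ["water", "thirsty", "drink"],
    "find_restroom": ["restroom", "bathroom", "toilet"],
}

def map_intent(utterance: str) -> str:
    """Map a free-form utterance to a structured action (defaulting to triage)."""
    text = utterance.lower()
    for intent, keywords in INTENT_KEYWORDS.items():
        if any(word in text for word in keywords):
            return intent
    return "triage"  # when unsure, err toward a human-style triage conversation

assert map_intent("I feel sick") == "triage"
assert map_intent("Where can I sleep?") == "find_shelter"
```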
Accomplishments that we're proud of
We are proud that SnapAid successfully integrates conversation, vision, real-world mapping, and accessibility-first design into a single seamless system. We leveraged AI to provide compassionate healthcare triage outside traditional clinics - meeting people where they are. We are also proud of our work on semantic mapping, teaching the system to intelligently handle messy, emotional human input - a feature we believe is critical for making AI truly useful in emergency or underserved contexts. Our exploration of future AR navigation using Lens Studio planted seeds for an even more immersive future where finding help becomes as simple as "looking around." Most of all, we're proud that SnapAid stayed true to its mission: providing a real lifeline for those who often have none.
What we learned
SnapAid taught us that building real-world systems for vulnerable populations requires much more than technical skill - it demands empathy, resilience, and humility. We learned that semantic understanding is a core necessity if you want real users, in chaotic situations, to trust and interact with your system naturally. We learned firsthand how dirty or unreliable public data can undo even the best models, underscoring the need for constant validation, normalization, and redundancy planning. We learned that latency in multimodal AI compounds quickly, and that unless it is rigorously optimized, even a few seconds of delay can break trust in live environments. On the accessibility front, we realized that good tech makes tasks easier, faster, and emotionally safer for users under stress. Finally, we learned that trust takes only one mistake to break and many good experiences to build, and that it must be protected at every step - from privacy to reliability to tone of voice.
What's next for SnapAid
Looking ahead, we are excited to expand SnapAid’s capabilities and impact. We want to enhance the semantic mapping engine, making it even smarter at interpreting real-world language variations and implied urgency, to create a more sensitive and responsive assistant. We also aim to build robust offline-first support, enabling SnapAid to function even when public WiFi or mobile signals are unstable. Prototyping AR navigation overlays is another top priority. We believe that allowing users to visually locate nearby clinics, shelters, or water sources through an AR interface would make finding help intuitive even without strong reading or navigation skills. Above all, we see SnapAid as the start of a larger mission - building healthcare tools that are available, empathetic, and trustworthy for all, especially those the world often overlooks.
Built With
- fastapi
- google-generativeai
- google-maps
- javascript
- laopendata
- lens-studio
- ngrok
- openai:texttospeech
- openai:visionapi
- python
- queryeasywaxapi
- typescript