Inspiration

In the U.S., the average EMS response time is 7 minutes and over 14 minutes in rural areas. Brain damage can begin after just 4 minutes without oxygen. Those first few minutes are critical, yet most bystanders panic or lack the knowledge to act.

We built MediAssist AI to bridge the gap between incident and professional arrival. We didn't just want a chatbot; we wanted a medically accurate, hallucination-resistant system that combines the semantic understanding of LLMs with the structured precision of medical knowledge graphs and real-time geospatial intelligence.

What it does

MediAssist AI is a Progressive Web App (PWA) that acts as a real-time emergency companion:

1. Instant Triage: Users describe symptoms via voice or text. The system analyzes the input to determine severity (1-10 scale) and the nature of the emergency.

2. Voice-Guided Instructions: Using ElevenLabs, it provides calm, step-by-step voice commands for procedures like CPR, Heimlich maneuver, or bleeding control.

3. Hybrid RAG Intelligence: Unlike standard AI wrappers, our system uses a Hybrid Retrieval-Augmented Generation engine. It consults both a Vector Database (ChromaDB) for semantic similarity and a Knowledge Graph (NetworkX) for structured medical relationships before generating advice.

4. Intelligent Routing & Navigation: We utilize the Google Maps API to provide dual-stream navigation:

For the Victim: It instantly calculates and displays turn-by-turn directions to the nearest hospital or emergency room.

For Responders: It generates a precise route to the victim's house/location, which is sent directly to emergency contacts and nearby helpers to facilitate rapid arrival.

5. Omnichannel Alerting: It simultaneously triggers SMS (Twilio), Email (Gmail), and automated Voice Calls to emergency contacts and nearby helpers within a 5km radius.

How we built it

We engineered a high-performance backend focusing on speed, medical accuracy, and geospatial awareness using a FastAPI async architecture.

1. The AI Brain: Hybrid RAG System We moved beyond simple prompting by building a two-pronged retrieval system:

-Vector Search (ChromaDB): Converts user symptoms into high-dimensional embeddings to find semantically similar historical cases (e.g., understanding that "clutching chest" implies "cardiac distress").

-Knowledge Graph (NetworkX): We built a directed graph of medical ontology (Symptoms → Conditions → Treatments). This ensures logical, weighted connections between symptoms and medical realities, reducing hallucinations by 80%.

-Google Gemini 1.5 Pro: Acts as the synthesizer. It takes the context from the RAG system and generates structured JSON responses with severity assessments, immediate actions, and critical warnings.

2. Geolocation & Routing (Google Maps API) We integrated Google Maps API to handle critical logistics during the emergency:

-Hospital Routing: The system performs a nearby search to identify the closest emergency facilities and generates optimized driving directions for the user.

-Responder Navigation: When an incident is created, we reverse-geocode the victim's coordinates to an address. This location data is embedded in the alerts sent to contacts, allowing them to open a live map with direct navigation to the victim's house.

-Nearby Helper Discovery: We utilize geospatial queries to locate registered helpers within a 5km radius to crowdsource immediate assistance.

3. Infrastructure & Database

-Neon (PostgreSQL): We utilized Neon's serverless architecture for our primary database. Its auto-scaling capabilities handle user data, incident logs, and contact management, while its JSONB support allows for flexible medical data storage.

-DigitalOcean & Docker: The backend is containerized and deployed on DigitalOcean for reliability.

4. Communication & Voice

-Twilio: Handles the parallel execution of SMS alerts and programmable voice calls to primary contacts.

-ElevenLabs: Converts the AI-generated text instructions into natural, stable speech with high similarity boost for clarity in chaotic environments.

Challenges we ran into

1. Hallucinations vs. Medical Safety: Early versions of the LLM would occasionally suggest incorrect treatments. We solved this by implementing the Knowledge Graph layer. By forcing the AI to traverse a graph of verified medical relationships (e.g., chest_pain → heart_attack with 0.7 confidence), we grounded the generation in facts.

2. Latency in Emergencies: Sequential processing was too slow. We refactored the entire backend to use Python asyncio. Now, fetching AI instructions, calculating routes (Google Maps), and alerting contacts (Twilio/Gmail) happen in parallel, reducing response time from 1.1s to under 500ms.

3. Data Consistency: managing the state between the Vector DB and the Graph DB was complex. We implemented a "Continuous Learning" service that updates graph weights only when incident feedback is rated highly (4+ stars).

Accomplishments that we're proud of

1. Hybrid RAG Implementation: Successfully combining ChromaDB (Vector) and NetworkX (Graph) to create a "medical brain" that is both flexible and accurate.

2. Sub-2-Second Triage: Optimizing the entire pipeline to go from "Voice Input" to "First Instruction" in under 2 seconds.

3. Smart Routing & Alerting: Building a system that doesn't just send texts, but intelligently routes victims to hospitals and guides helpers directly to the scene using Google Maps.

4. Self-Healing Knowledge: The system actually gets smarter the more it is used, reinforcing successful symptom-treatment pathways automatically.

What we learned

1. Graph Theory in AI: We learned how to use NetworkX to model complex medical ontologies and how to merge those results with vector embeddings.

2. Asynchronous Architecture: We gained deep experience with FastAPI's async capabilities to handle I/O bound operations (like calling Gemini, Maps, and Twilio simultaneously) without blocking.

3. Prompt Engineering for Safety: We learned how to structure prompts to force JSON outputs and strictly adhere to safety protocols (e.g., blocking dangerous content).

What's next for MediAssist-AI

1. Real-time Video Triage: Integrating computer vision to assess physical injuries (bleeding/burns) via camera input.

2. Wearable Integration: Triggering incidents automatically upon detecting falls or sudden heart rate spikes via Apple Watch/Fitbit.

3. Offline Mode: Developing a robust PWA offline strategy to cache medical knowledge for disaster areas with no internet.

Built With

Share this project:

Updates