About the Project

Inspiration
Emergency response systems are reaching a breaking point. In New York City, dispatchers are often overwhelmed by more than 5,000 calls a day, while 60% of EMS agencies nationwide report critical staffing shortages. In the UK, people can wait nearly fifty minutes for a heart attack response, and in rural Nepal, a doctor might be a half-day trek away. This strain leaves a dangerous "Bystander Gap" during the moments when it matters most, especially since over 70% of cardiac arrests happen at home. We built Respondr to bridge this divide. Our platform uses real-time wearable vitals and one-click video consults to turn bystanders into lifesavers. Because when survival rates can increase by 70% if action is taken within just two minutes, the world can't afford to wait.

As a team of four students from Nepal, we have seen firsthand how the lack of nearby clinics or available ambulances can turn a treatable incident into a tragedy. By streaming live vitals like heart rate and oxygen levels from wearables directly to a professional, we take the fear and guesswork out of the situation for a bystander who might otherwise freeze. Through a one-click video call, a doctor can see the real-time data and provide the exact guidance needed to perform first aid or use an AED before EMS ever arrives. We want to make sure that in those critical minutes, no one has to face an emergency alone.

How We Built It
We built Respondr as a real‑time platform with a React + Vite frontend in TypeScript and a Python FastAPI backend. Supabase provides authentication and Postgres storage; the client uses the Supabase SDK for login/roles, while the API ingests wearable data through a health service, normalizes samples, and stores them in realtime and aggregated tables. Background schedulers run hourly alert scans and emergency checks, comparing vitals to thresholds and creating alerts or emergency records. Doctors see dashboards and reports, including AI summaries from Gemini, and can jump into Daily.co video calls directly from the workflow.
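The core of the alert pipeline is comparing normalized vitals to threshold ranges. A minimal sketch of what such a scan might look like, where the threshold values, field names, and severity rules are illustrative assumptions rather than Respondr's actual configuration:

```python
from dataclasses import dataclass

# Hypothetical threshold ranges (low, high); real values live in backend config.
THRESHOLDS = {
    "heart_rate": (40, 130),  # beats per minute
    "spo2": (90, 100),        # percent oxygen saturation
}

@dataclass
class VitalSample:
    user_id: str
    metric: str
    value: float

def scan_for_alerts(samples: list[VitalSample]) -> list[dict]:
    """Compare each normalized sample to its threshold range and
    return alert records for out-of-range vitals."""
    alerts = []
    for s in samples:
        low, high = THRESHOLDS.get(s.metric, (float("-inf"), float("inf")))
        if not (low <= s.value <= high):
            # Escalate severely low oxygen; everything else is a warning.
            severity = "critical" if s.metric == "spo2" and s.value < 85 else "warning"
            alerts.append({
                "user_id": s.user_id,
                "metric": s.metric,
                "value": s.value,
                "severity": severity,
            })
    return alerts
```

A scheduler can run a function like this hourly over the realtime table and insert the resulting records into an alerts table, so the request path stays fast.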

What We Learned
We learned that in emergencies, every second counts. On the backend, we got good at normalizing raw wearable data so background jobs can scan for abnormalities without slowing the app, and we learned that doctors need better data, not more data, so we use LLMs to turn messy vitals into concise summaries that help them act fast. On the frontend, we focused on a calm, reliable interface where one click opens the video bridge and live stream. Most of all, we learned how to connect authentication, real-time databases, and video APIs into one system that actually closes the gap between an incident and professional help.

Challenges
Deployment to Render and Vercel was harder than expected because some API calls still pointed at local endpoints and environment variables had to be aligned across the frontend, backend, and Supabase. We built the whole system in about 20 hours, so we had to make fast tradeoffs and cut scope. The biggest issue was emergency calling from the patient side: we tried to let patients initiate the video call, but the flow was unreliable and calls often failed to connect. To keep the experience stable, we pivoted to a doctor-initiated call that responders can reliably pick up. We also had no time to ship a mobile app, so we focused on a responsive web experience instead.
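The doctor-initiated flow boils down to the backend creating a short-lived video room and handing its URL to both parties. A sketch of how that could look against Daily's REST `/rooms` endpoint, using only the standard library; the room naming, expiry, and participant cap here are our illustrative assumptions, not Respondr's exact settings:

```python
import json
import os
import time
import urllib.request

DAILY_ROOMS_URL = "https://api.daily.co/v1/rooms"

def build_room_config(emergency_id: str, ttl_seconds: int = 3600) -> dict:
    """Room settings for a doctor-initiated consult. Property names
    follow Daily's rooms API: `exp` is a unix expiry timestamp."""
    return {
        "name": f"consult-{emergency_id}",  # hypothetical naming scheme
        "properties": {
            "exp": int(time.time()) + ttl_seconds,  # auto-expire the room
            "max_participants": 2,  # doctor + bystander
        },
    }

def create_consult_room(emergency_id: str) -> str:
    """Create a short-lived Daily room and return its join URL."""
    req = urllib.request.Request(
        DAILY_ROOMS_URL,
        data=json.dumps(build_room_config(emergency_id)).encode(),
        headers={
            "Authorization": f"Bearer {os.environ['DAILY_API_KEY']}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req, timeout=10) as resp:
        return json.load(resp)["url"]
```

Because only the doctor's dashboard calls `create_consult_room`, the patient side just receives a URL to open, which is what made this flow far more reliable than patient-initiated calling.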

Built With

React + Vite (TypeScript), Python FastAPI, Supabase (auth + Postgres), Gemini, Daily.co
