Inspiration

Everyone has been in a hospital and felt the gap between needing help and actually getting it. When a patient presses the call button, nobody knows whether it's pain, fear, or something truly life-threatening. Patients can't always explain what's happening, and critical moments often look the same as non-critical ones. The nurses we spoke to feel the same pressure from the other side: they want to help, but they're overwhelmed, understaffed, and working with tools that can't separate noise from real emergencies. It's not a patient problem or a nurse problem. It's a system problem. ASDA exists to close that gap so urgency is finally understood on both sides.

What it does

ASDA creates a real-time patient intelligence layer across the hospital. It processes inbound patient calls through a voice-driven agent, extracts intent and clinical signals, and forwards urgent situations to the correct nurse. Patients with special needs or acute medical conditions are monitored 24/7 so that accidents like falls or medical episodes don't go unnoticed. This information is displayed on a 3D interactive dashboard and turned into emergency alerts. In addition, incident reports are logged through an SAP Business Process Automation workflow.

The system prioritizes alerts and escalates only validated emergencies to the responsible nurses.
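To make the prioritization idea concrete, here is a minimal sketch of how such an escalation filter could work. The signal names, weights, and threshold below are illustrative assumptions, not ASDA's actual rules:

```python
from dataclasses import dataclass

# Hypothetical severity weights -- illustrative only, not ASDA's real rules.
SIGNAL_WEIGHTS = {
    "fall_detected": 10,
    "chest_pain": 9,
    "breathing_difficulty": 9,
    "pain": 5,
    "anxiety": 2,
    "request_water": 1,
}

ESCALATION_THRESHOLD = 8  # assumed cutoff for paging a nurse

@dataclass
class Alert:
    room: str
    signals: list

    def score(self) -> int:
        # Take the strongest signal rather than summing, so several
        # low-priority requests don't masquerade as one emergency.
        return max((SIGNAL_WEIGHTS.get(s, 0) for s in self.signals), default=0)

    def should_escalate(self) -> bool:
        return self.score() >= ESCALATION_THRESHOLD

alert = Alert(room="204", signals=["pain", "breathing_difficulty"])
print(alert.score(), alert.should_escalate())  # 9 True
```

Taking the maximum instead of the sum is one way to "validate" emergencies: only a genuinely severe signal crosses the threshold, no matter how many routine requests pile up.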

How we built it

The backend is powered by Node, PostgreSQL, and Python with OpenCV and MediaPipe. Inbound and outbound calls are handled by Twilio and processed through an ElevenLabs conversational agent that uses Anthropic Claude to extract relevant information from the call.
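The writeup doesn't show the schema the agent extracts, so as a hedged illustration: the backend might ask the model for a JSON object and validate it before routing, roughly like this (the field names and urgency levels are assumptions):

```python
import json

# Hypothetical fields -- the real extraction schema is not shown in the writeup.
REQUIRED_FIELDS = {"room", "intent", "urgency"}
VALID_URGENCY = {"low", "medium", "high", "critical"}

def parse_call_summary(raw: str) -> dict:
    """Validate the JSON blob returned by the conversational agent
    before it is allowed to trigger alerts or nurse routing."""
    data = json.loads(raw)
    missing = REQUIRED_FIELDS - data.keys()
    if missing:
        raise ValueError(f"agent response missing fields: {sorted(missing)}")
    if data["urgency"] not in VALID_URGENCY:
        raise ValueError(f"unknown urgency level: {data['urgency']!r}")
    return data

summary = parse_call_summary(
    '{"room": "312", "intent": "chest pain, short of breath", "urgency": "critical"}'
)
print(summary["urgency"])  # critical
```

Validating at this boundary keeps a malformed model response from silently producing a wrong-priority alert downstream.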

The frontend uses React, Vite, shadcn/ui, and a full 3D hospital layout built with Three.js and React Three Fiber. Each room updates in real time as alerts, calls, or video events come in.

The 24/7 patient monitoring for detecting accidents is built with Python: OpenCV handles the video stream, and MediaPipe performs pose estimation using landmarks and skeletal tracking.
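The writeup doesn't detail the detection heuristic itself. A common minimal approach on MediaPipe-style landmarks (normalized x/y coordinates, with y growing downward) is to flag a fall when the shoulder-hip axis tips toward horizontal while the hips drop quickly between frames. A sketch under those assumptions, with thresholds that are illustrative rather than tuned values:

```python
import math

def torso_angle_deg(shoulder, hip):
    """Angle of the shoulder->hip axis from vertical, in degrees.
    Landmarks are (x, y) in normalized image coords; y grows downward."""
    dx = hip[0] - shoulder[0]
    dy = hip[1] - shoulder[1]
    return abs(math.degrees(math.atan2(dx, dy)))

def is_fall(prev_hip_y, hip_y, shoulder, hip,
            drop_thresh=0.15, angle_thresh=60.0):
    """Illustrative heuristic: hips dropped fast AND torso is near horizontal.
    The thresholds here are assumptions, not ASDA's tuned values."""
    dropped = (hip_y - prev_hip_y) > drop_thresh
    tipped = torso_angle_deg(shoulder, hip) > angle_thresh
    return dropped and tipped

# Standing upright in one frame, lying on the floor in the next:
print(is_fall(0.5, 0.9, shoulder=(0.2, 0.88), hip=(0.6, 0.9)))   # True
# Sitting down slowly: small drop, torso fairly upright:
print(is_fall(0.5, 0.52, shoulder=(0.4, 0.3), hip=(0.42, 0.6)))  # False
```

Requiring both conditions at once is what keeps slow, intentional movements (sitting, bending over) from triggering false alarms.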

The SAP automation is started via an API call triggered by an event from the patient monitoring.
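We can only guess at the exact request shape; assuming the monitoring service fires a JSON event at a workflow-trigger endpoint, the bridge could look roughly like this. The URL, process definition ID, and payload fields are all hypothetical stand-ins:

```python
import json
import urllib.request

# Hypothetical endpoint -- the real SAP trigger URL is configured per tenant
# and is not part of this writeup.
SAP_TRIGGER_URL = "https://example.invalid/workflow/rest/v1/workflow-instances"

def build_incident_payload(room: str, event_type: str, timestamp: str) -> dict:
    """Shape a monitoring event into a workflow-start request body
    (field names are assumptions for illustration)."""
    return {
        "definitionId": "incidentreport",  # assumed process definition id
        "context": {"room": room, "eventType": event_type, "occurredAt": timestamp},
    }

def trigger_incident_workflow(payload: dict, token: str) -> None:
    req = urllib.request.Request(
        SAP_TRIGGER_URL,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Bearer {token}"},
        method="POST",
    )
    urllib.request.urlopen(req)  # fire-and-forget; real code would retry and log

payload = build_incident_payload("204", "fall_detected", "2024-11-16T03:12:00Z")
print(payload["context"]["eventType"])  # fall_detected
```

Keeping payload construction separate from the HTTP call makes the event-to-incident mapping testable without touching the SAP tenant.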

Challenges we ran into

Building all of these capabilities into a single system was difficult, since each of them interacts with the dashboard and they all have to run seamlessly together. We had to synchronize Twilio SIP audio, ElevenLabs voice AI, video-analysis events, and WebSocket dashboard updates without dropping context.
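One common way to keep several producers (telephony, voice AI, video analysis) feeding one dashboard without dropping context is to funnel everything through a single async event bus that fans out to WebSocket subscribers. A simplified stdlib-only sketch of that pattern (illustrative design, not ASDA's exact code):

```python
import asyncio

class EventBus:
    """Fan out events from many producers to many subscribers."""
    def __init__(self):
        self.subscribers: list[asyncio.Queue] = []

    def subscribe(self) -> asyncio.Queue:
        q = asyncio.Queue()
        self.subscribers.append(q)
        return q

    async def publish(self, event: dict) -> None:
        for q in self.subscribers:
            await q.put(event)  # each dashboard socket gets its own copy

async def main():
    bus = EventBus()
    dashboard = bus.subscribe()
    # Producers tag events with their source so the UI can keep context
    # when calls and video events interleave.
    await bus.publish({"source": "twilio", "room": "312", "type": "call_started"})
    await bus.publish({"source": "video", "room": "204", "type": "fall_detected"})
    first = await dashboard.get()
    print(first["source"])  # twilio

asyncio.run(main())
```

Because every subscriber gets its own queue, a slow dashboard connection buffers its own backlog instead of stalling the producers.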

On the video side, achieving stable pose estimation and fall detection required heavy optimization of our Python pipeline, especially when parsing shaky or low-light footage. Additionally, handling Python version conflicts across different operating systems was a major challenge.

But the biggest challenge was implementing the SAP automations, given the platform's large number of capabilities and our zero prior experience with it.

Accomplishments that we're proud of

The three achievements we’re most proud of:

  1. Real fall detection with motion-capture skeletal tracking. We built a fully working pipeline that analyzes videos, extracts human pose landmarks, and detects abnormal movement patterns.

  2. A live 3D hospital floorplan that updates in milliseconds. Every room reflects real signals from patients, calls, and video analytics.

  3. Real voice-based patient interaction through Twilio + ElevenLabs. Patients can call, speak naturally, and our system routes the right information to the right staff instantly.

What we learned

We learned that agent systems only work well when each agent has a single, clearly defined job. Anything broader creates confusion and unpredictable behavior. We also learned that combining voice input, video analysis, and real-time dashboards requires clean interfaces and strict data flow rules. Even small delays or inconsistencies in one part of the system can break the whole pipeline.
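As a tiny illustration of that "one agent, one job" principle (the agent roster below is an example, not our exact setup): routing stays predictable when each agent declares exactly which event type it owns, and anything unmapped fails loudly instead of being guessed at.

```python
# Each agent owns exactly one event type -- illustrative roster, not ASDA's.
AGENT_ROUTES = {
    "patient_call": "triage_agent",
    "fall_detected": "incident_agent",
    "vitals_anomaly": "monitoring_agent",
}

def route(event_type: str) -> str:
    try:
        return AGENT_ROUTES[event_type]
    except KeyError:
        # An unmapped event is a design error, not something to guess about.
        raise ValueError(f"no agent owns event type {event_type!r}")

print(route("fall_detected"))  # incident_agent
```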

What's next for ASDA

  1. Real-time continuous monitoring. Move from video uploads to live, continuous motion analysis with on-device processing.

  2. More clinical agents. A medication-risk agent, a room-safety agent, and a night-shift check-in agent.

  3. Sensor fusion. Combine voice intent, movement patterns, and video signals into one unified risk score.

  4. On-device edge processing. Run fall detection and motion analysis locally to stay privacy-safe and lightweight.

  5. Integration with hospital systems. Build stable connectors for SAP, ORBIS, and other German clinical systems so alerts and incidents flow directly into existing hospital workflows.

  6. Reinforcement learning from nurse feedback. Use real nurse decisions to improve routing accuracy and emergency classification over time.

  7. Federated learning across hospitals. Train models across many clinics without sharing raw patient data, keeping privacy intact while improving performance system-wide.
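The sensor-fusion idea (item 3) could start as simply as a weighted combination of per-channel scores. The channels and weights below are purely illustrative assumptions:

```python
# Assumed per-channel risk scores in [0, 1] -- voice intent, movement, video.
FUSION_WEIGHTS = {"voice": 0.4, "movement": 0.3, "video": 0.3}

def unified_risk(scores: dict) -> float:
    """Weighted average over whichever channels reported this interval,
    renormalized so a missing channel doesn't artificially lower the risk."""
    present = {k: v for k, v in scores.items() if k in FUSION_WEIGHTS}
    if not present:
        return 0.0
    total_w = sum(FUSION_WEIGHTS[k] for k in present)
    return sum(FUSION_WEIGHTS[k] * v for k, v in present.items()) / total_w

# Movement channel silent this interval; voice and video still fuse cleanly.
print(round(unified_risk({"voice": 0.9, "video": 0.8}), 3))  # 0.857
```

Renormalizing over the channels actually present matters in practice: a patient who never speaks shouldn't look lower-risk just because the voice channel is empty.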

Built with Node, PostgreSQL, Python, SAP, Three.js, OpenCV, MediaPipe, Twilio, ElevenLabs, Anthropic Claude, and Docker.
