Inspiration
Angelware was inspired by a simple problem: in emergencies, responders and care teams often lose time because they cannot see movement, posture, and patient status quickly enough. We wanted a system that could combine privacy-aware sensing, live situational awareness, and instant voice/phone alerts so teams can act faster in critical moments.
What it does
Angelware is a real-time command-and-response platform that combines:
- A live operations dashboard for floor-level situational awareness
- Motion sensing from ESP32 CSI telemetry to detect activity levels
- DensePose-based person visualization running on a RunPod GPU, streaming sensor data and presence frames over WebSockets for fast remote inference
- An ESP32 mesh system that lets firefighters detect nearby teammates and confirm presence in real time
- Voice-agent responses using ElevenLabs for spoken updates
- Twilio-based outbound calling for patient/alert notifications
In short, Angelware helps teams monitor movement, identify potential incidents, and trigger communication workflows immediately.
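The CSI-based motion sensing above can be sketched as a sliding window of amplitude frames scored by per-subcarrier variance. This is an illustrative reconstruction, not Angelware's actual scoring code; the `MotionDetector` name, window size, and threshold are assumptions.

```python
from collections import deque

import numpy as np


def motion_score(amplitudes: np.ndarray) -> float:
    """Score activity as the mean per-subcarrier variance of CSI
    amplitudes over a window (rows = packets, cols = subcarriers)."""
    if amplitudes.shape[0] < 2:
        return 0.0
    return float(np.mean(np.var(amplitudes, axis=0)))


class MotionDetector:
    """Sliding window of recent CSI frames with a simple threshold.

    `window` and `threshold` are illustrative values, not the
    project's real tuning.
    """

    def __init__(self, window: int = 50, threshold: float = 1.0):
        self.frames = deque(maxlen=window)
        self.threshold = threshold

    def update(self, frame: np.ndarray) -> bool:
        """Add one amplitude frame; return True if motion is detected."""
        self.frames.append(frame)
        return motion_score(np.array(self.frames)) > self.threshold
```

Variance-based scoring is attractive here because a still room produces nearly constant subcarrier amplitudes, while movement perturbs the multipath channel and spreads them out.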
How we built it
We built Angelware as a multi-part Python + embedded system:
- Frontend: Pygame command center UI optimized for an 800x480 display (Raspberry Pi deployment), including live status panels and floor overlays
- Backend sensing: ESP32 firmware plus Python serial parsers/processors for CSI motion scoring and event logging
- Inference layer: on-demand RunPod GPUs for running pre-trained Wi-Fi CSI sensing models
- Voice layer: ElevenLabs conversational simulation + text-to-speech playback for nurse/assistant style responses
- Alerting layer: Twilio call script for automated spoken patient-tag alerts
- Dev setup: shared virtual environment, environment-variable driven configuration, and setup scripts for reproducible local runs
Challenges we ran into
- Balancing real-time throughput across camera capture, network streaming, inference, and UI rendering
- Preventing serial/UART saturation from high-frequency CSI logs while preserving useful motion telemetry
- Making DensePose responsive enough for live demo conditions (frame size, mode tradeoffs)
- Cross-platform setup friction (audio playback dependencies, environment setup on Windows vs Linux/Pi)
- Keeping outputs useful when no person is detected and avoiding noisy/unstable behavior in edge conditions
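One common way to keep high-frequency CSI logs from saturating the serial link, as in the second challenge above, is to decimate routine samples while always passing high-score spikes through. This is a hedged sketch of that pattern; the class name and parameter values are illustrative, not the project's actual throttling code.

```python
class TelemetryThrottle:
    """Forward at most one of every `decimate` routine samples, but
    always pass samples whose motion score exceeds `spike_threshold`
    so interesting events survive the downsampling."""

    def __init__(self, decimate: int = 10, spike_threshold: float = 5.0):
        self.decimate = decimate
        self.spike_threshold = spike_threshold
        self._count = 0

    def accept(self, score: float) -> bool:
        """Return True if this sample should be logged/transmitted."""
        self._count += 1
        if score > self.spike_threshold:
            return True
        return self._count % self.decimate == 0
```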
Accomplishments that we're proud of
- End-to-end pipeline working across sensing, visualization, and communication
- Real-time DensePose stream server with practical performance optimizations for live use
- Functional command center UI running on constrained display hardware
- Working voice-agent and phone-alert integrations tied to operational events
- A modular architecture that lets us swap components (mock/live services, sensor inputs, render modes)
What we learned
- Real-time systems succeed or fail on interfaces between components, not just model accuracy
- Compression, frame sizing, and transport choices can matter more than raw model speed
- Embedded telemetry requires careful logging discipline to avoid starving critical tasks
- Reliability and operator clarity are just as important as technical novelty in emergency workflows
- Designing for fallback paths (mock services, dry runs, graceful errors) dramatically improves demo and deployment resilience
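The fallback-path lesson above can be captured as a small factory that degrades to a console mock when the live service is unavailable. This is a sketch of the pattern only: the `ELEVENLABS_API_KEY` variable name, the `angelware.voice` module, and the `LiveVoice` class are hypothetical.

```python
import os


class ConsoleVoice:
    """Fallback 'voice agent' that prints instead of speaking."""

    def say(self, text: str) -> str:
        line = f"[voice:mock] {text}"
        print(line)
        return line


def make_voice_agent():
    """Return a live voice agent when configured, otherwise fall back
    to the console mock so demos degrade gracefully instead of crashing.

    The env var name and LiveVoice import are illustrative."""
    if not os.environ.get("ELEVENLABS_API_KEY"):
        return ConsoleVoice()
    try:
        from angelware.voice import LiveVoice  # hypothetical live client
        return LiveVoice()
    except ImportError:
        return ConsoleVoice()
```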
What's next for Angelware
- Fuse CSI-derived motion and pose streams into one confidence-scored event engine
- Add stronger intelligence (priority queues, escalation policies, false-positive suppression)
- Expand from single alerts to multi-channel workflows (voice, SMS, dashboard acknowledgments)
- Improve on-device performance and offline-first behavior for degraded network environments
- Run pilot tests with responders/care teams and iterate on UX, alert wording, and intervention timing