Inspiration

Falls are one of the biggest safety risks for older adults—especially in hospice and home care, where patients may be frail, medicated, or alone for periods of time. The CDC reports falls are the leading cause of injury for adults 65+, and roughly 1 in 4 older adults report a fall each year. We built CareWatch to reduce the “time to help” after a fall, without requiring wearables or the person pressing a button.

What it does

CareWatch turns any camera into a real-time fall monitor:

  • Runs on-device pose detection to track posture + motion continuously
  • Detects and confirms falls (to reduce false alarms) and captures image evidence
  • Streams a live MJPEG feed to a caregiver dashboard
  • Triggers an agentic AI workflow that sends caregivers a structured message describing what happened (with image evidence), so they can respond quickly

How we built it

  • Edge fall detection (Python): OpenCV + MediaPipe Pose extracts keypoints; our fall state machine uses torso angle, vertical drop, and immobility timing to confirm a fall.
  • Streaming: A lightweight MJPEG server shares the live camera feed for the web dashboard.
  • Web app (Next.js): Receives fall events, stores alerts, and updates the dashboard in real time.
  • Agentic AI workflow (OpenClaw): The web app sends a webhook to OpenClaw; OpenClaw fetches the image, uses a vision model to generate a caregiver-ready summary, and delivers it to messaging channels (like Telegram/text).
  • Edge hardware: Designed to run locally on Qualcomm-provided edge devices (Rubik), keeping latency low and reducing dependence on cloud round-trips.
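The fall state machine above combines three signals before it alerts. Here is a minimal sketch of that logic; the threshold values and the keypoint-derived inputs (torso angle, hip drop, motion flag) are illustrative assumptions, not our tuned parameters:

```python
import time
from enum import Enum, auto

class FallState(Enum):
    UPRIGHT = auto()
    CONFIRMING = auto()  # person is down; waiting out the immobility window
    CONFIRMED = auto()

class FallStateMachine:
    """Confirms a fall from torso angle, vertical drop, and immobility timing.

    Thresholds here are placeholders for illustration only.
    """
    TILT_DEG = 60     # torso angle from vertical that counts as "down"
    DROP_FRAC = 0.25  # recent hip drop as a fraction of frame height
    STILL_SECS = 3.0  # immobility window required to confirm

    def __init__(self):
        self.state = FallState.UPRIGHT
        self.down_since = None

    def update(self, torso_angle_deg, hip_drop_frac, is_moving, now=None):
        now = now if now is not None else time.monotonic()
        down = torso_angle_deg > self.TILT_DEG or hip_drop_frac > self.DROP_FRAC
        if self.state == FallState.UPRIGHT:
            if down:
                self.state = FallState.CONFIRMING
                self.down_since = now
        elif self.state == FallState.CONFIRMING:
            if not down or is_moving:
                # Recovered or moving again: reset instead of alerting.
                self.state = FallState.UPRIGHT
                self.down_since = None
            elif now - self.down_since >= self.STILL_SECS:
                self.state = FallState.CONFIRMED
        return self.state
```

Requiring the immobility window before `CONFIRMED` is what keeps a quick stumble-and-recover from paging a caregiver.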
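For the streaming piece, a "lightweight MJPEG server" can be as small as a `multipart/x-mixed-replace` handler on Python's standard `http.server`. This sketch stands in for our actual server; `frame_source()` is an assumed hook that would pull encoded OpenCV frames in the real system, and here just yields fake JPEG bytes:

```python
import http.server
import threading

BOUNDARY = "frame"

def frame_source():
    """Assumed hook: yield JPEG-encoded frames from the camera loop.

    The real version would loop forever over cv2-encoded frames; these
    fake byte strings just keep the sketch self-contained.
    """
    yield b"\xff\xd8 fake jpeg 1 \xff\xd9"
    yield b"\xff\xd8 fake jpeg 2 \xff\xd9"

class MJPEGHandler(http.server.BaseHTTPRequestHandler):
    """Streams frames as multipart/x-mixed-replace, which browsers render as video."""

    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Type",
                         f"multipart/x-mixed-replace; boundary={BOUNDARY}")
        self.end_headers()
        for jpeg in frame_source():
            # Each part is one JPEG frame with its own mini headers.
            self.wfile.write(f"--{BOUNDARY}\r\n"
                             "Content-Type: image/jpeg\r\n"
                             f"Content-Length: {len(jpeg)}\r\n\r\n".encode())
            self.wfile.write(jpeg + b"\r\n")

    def log_message(self, *args):
        pass  # keep the detector's console output quiet
```

Pointing an `<img src="http://device:PORT/">` tag at this endpoint is enough for a dashboard to show live video, with no client-side player code.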

Challenges we ran into

  • Balancing sensitivity vs. false positives (fast detection, but not alert fatigue)
  • Running real-time vision reliably on edge hardware with limited resources
  • Getting a smooth end-to-end pipeline: camera → detection → dashboard → agent message
  • Webhook/auth setup details (tokens, timeouts, making sure alerts never block the detector)
  • Making the agent message “actionable” and safe (clear description + evidence, not noisy instructions)

Accomplishments that we’re proud of

  • A full working pipeline from a live camera all the way to a caregiver notification
  • Real-time monitoring: live stream + alert dashboard + immediate messaging
  • An agentic workflow that turns a raw event into a clear, human-readable caregiver summary
  • Edge-first design that keeps the system responsive and practical for hospice/home settings

What we learned

  • Edge AI is as much about engineering as models: latency, reliability, and failure modes matter
  • A simple, well-tuned state machine can dramatically improve real-world usability
  • Alerting is a product problem: the best detection is useless without clear delivery and context
  • Decoupling components (detector ↔ web app ↔ agent) makes the system easier to iterate on

What’s next for CareWatch

  • Improve fall classification with more scenarios (near-falls, sitting/lying transitions, assistive devices)
  • Add multi-camera support + better patient/location context
  • Stronger privacy controls (retention policies, on-device redaction/blur, audit logs)
  • Integrate escalation logic (no acknowledgement → notify nurse-on-duty → family)
  • Expand beyond falls into broader safety monitoring (wandering risk, prolonged immobility, missed check-ins)
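The escalation item above could be modeled as an acknowledgement-gated chain: notify one contact, wait for an ack, and only then move down the list. A sketch, where the recipients, timeouts, and the `notify`/`acked` callbacks are all hypothetical stand-ins for real integrations:

```python
# Hypothetical escalation chain: (recipient, seconds to wait for an ack).
ESCALATION_CHAIN = [
    ("primary_caregiver", 120),
    ("nurse_on_duty", 180),
    ("family_contact", 60),
]

def escalate(alert, notify, acked, poll_interval=5, sleep=None):
    """Walk the chain until someone acknowledges the alert.

    `notify(recipient, alert)` sends the message (e.g. via Telegram/text);
    `acked(alert)` polls whatever store records acknowledgements. `sleep`
    is injectable so the logic can be tested without real waiting.
    """
    import time
    sleep = sleep or time.sleep
    for recipient, wait_secs in ESCALATION_CHAIN:
        notify(recipient, alert)
        for _ in range(max(1, wait_secs // poll_interval)):
            if acked(alert):
                return recipient  # someone took ownership; stop escalating
            sleep(poll_interval)
    return None  # nobody acknowledged; the alert stays open for follow-up
```

Keeping the chain as plain data makes it easy to configure per patient (different contacts, different urgency windows) without touching the logic.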
