Inspiration

WatchDawg started from a simple observation: falls are high-risk, high-consequence events for older adults, but most monitoring systems still depend on wearables that people forget, stop charging, or choose not to wear. We wanted to build something that worked in the background, without changing a person’s routine.

Our goal was to create a system that could notice a fall, physically respond in the environment, and notify the right people quickly — without requiring a wearable or constant cloud monitoring.

What it does

WatchDawg runs on a Unitree Go2 EDU, powered by an NVIDIA Jetson Orin NX, and uses an Intel RealSense D435 RGB-D camera for real-time perception.

The robot continuously monitors for signs of a fall using on-device vision. When a likely fall is confirmed, the system actively responds (a simplified sketch follows this list):

  • Detects surrounding objects and obstacles in the environment
  • Uses depth data from the RealSense D435 to estimate the distance to the person
  • Calculates a safe and controlled stopping distance
  • Autonomously walks the computed distance while avoiding obstacles in real time
  • Sends SMS notifications to caregivers and emergency contacts with clear incident context and timing
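
Here is a minimal sketch of the stopping-distance idea; the constants and function name are illustrative, not the values tuned on the robot:

    # Hypothetical sketch: how far to walk given the measured depth to
    # the person, stopping short at a safe interaction range.
    INTERACTION_RANGE_M = 0.8   # illustrative stop-off distance (assumption)
    MAX_APPROACH_M = 5.0        # illustrative safety cap (assumption)

    def approach_distance(person_depth_m: float) -> float:
        """Distance the robot should walk toward the person, in meters."""
        travel = person_depth_m - INTERACTION_RANGE_M
        # Clamp so we never command a negative or implausibly long approach.
        return max(0.0, min(travel, MAX_APPROACH_M))

    print(approach_distance(2.6))  # person 2.6 m away -> walk 1.8 m, then stop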

All core incident logic runs locally on the Jetson Orin NX for privacy. The system does not depend on a wearable and does not need to stream raw personal data to the cloud to function.

How we built it

We built WatchDawg as an event-driven robotics pipeline using Python and ROS2, with OpenCV handling the live vision path and a custom state machine controlling incident flow and robot behavior.
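
The incident flow is easiest to picture as a small state machine. The sketch below is illustrative; the state and event names are assumptions rather than our exact implementation:

    from enum import Enum, auto

    class State(Enum):
        MONITORING = auto()   # watching the RGB stream for fall candidates
        CONFIRMING = auto()   # holding a candidate until enough frames agree
        RESPONDING = auto()   # approaching the person, avoiding obstacles
        ALERTING = auto()     # sending caregiver notifications
        IDLE = auto()         # incident handled, waiting for reset

    TRANSITIONS = {
        (State.MONITORING, "fall_candidate"): State.CONFIRMING,
        (State.CONFIRMING, "fall_confirmed"): State.RESPONDING,
        (State.CONFIRMING, "false_alarm"): State.MONITORING,
        (State.RESPONDING, "arrived"): State.ALERTING,
        (State.ALERTING, "alert_sent"): State.IDLE,
    }

    def next_state(state: State, event: str) -> State:
        # Unknown events leave the state unchanged.
        return TRANSITIONS.get((state, event), state)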

The Intel RealSense D435 provides synchronized color and depth streams (a minimal capture sketch follows this list):

  • The RGB stream feeds the fall detection pipeline
  • The depth stream enables real-world distance estimation
  • Depth data provides 3D spatial awareness for obstacle detection
  • The robot uses this data to compute a controlled approach distance
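
For illustration, a minimal pyrealsense2 capture path looks roughly like this; the resolutions and the queried pixel are placeholders, not our production code:

    import numpy as np
    import pyrealsense2 as rs

    pipeline = rs.pipeline()
    config = rs.config()
    # Illustrative resolutions; depth is aligned onto the color frame so a
    # pixel found by the RGB fall detector maps to a metric distance.
    config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
    config.enable_stream(rs.stream.color, 640, 480, rs.format.bgr8, 30)
    pipeline.start(config)
    align = rs.align(rs.stream.color)

    frames = align.process(pipeline.wait_for_frames())
    color = np.asanyarray(frames.get_color_frame().get_data())  # -> fall detector
    depth_frame = frames.get_depth_frame()
    # Distance in meters at a pixel, e.g. the detected person's centroid:
    person_distance_m = depth_frame.get_distance(320, 240)
    pipeline.stop()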

Once a fall is confirmed, the state machine transitions into response mode. The system calculates the person’s distance using depth measurements, generates motion commands, and sends navigation instructions through ROS2 to the Unitree Go2.
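
A pared-down sketch of that command path with rclpy; the '/cmd_vel' topic and the speed are assumptions, as the actual Go2 bridge may expose a different interface:

    import rclpy
    from rclpy.node import Node
    from geometry_msgs.msg import Twist

    class ApproachNode(Node):
        def __init__(self):
            super().__init__('watchdawg_approach')
            # '/cmd_vel' is an assumed velocity topic; the real Go2
            # bridge may use a different topic or message type.
            self.cmd_pub = self.create_publisher(Twist, '/cmd_vel', 10)

        def walk_forward(self, speed_mps: float) -> None:
            msg = Twist()
            msg.linear.x = speed_mps   # forward velocity in m/s
            self.cmd_pub.publish(msg)

        def stop(self) -> None:
            self.cmd_pub.publish(Twist())  # all-zero twist halts the robot

    rclpy.init()
    node = ApproachNode()
    node.walk_forward(0.3)  # slow, controlled approach speed (illustrative)
    node.stop()
    rclpy.shutdown()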

During movement, obstacle information is continuously evaluated so the robot can dynamically adjust its path and avoid collisions.
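
At its simplest, the clearance check scans a central band of the depth image and gates forward speed; our real behavior also steers around obstacles, and the region and threshold below are illustrative:

    import numpy as np

    def nearest_obstacle_m(depth_m: np.ndarray) -> float:
        """Closest valid reading in a central band of the depth image (meters)."""
        h, w = depth_m.shape
        roi = depth_m[h // 3 : 2 * h // 3, w // 4 : 3 * w // 4]
        valid = roi[roi > 0]            # zeros are missing depth readings
        return float(valid.min()) if valid.size else float("inf")

    def safe_speed(depth_m: np.ndarray, cruise_mps: float = 0.3) -> float:
        # The 0.5 m clearance threshold is illustrative, not our tuned value.
        return cruise_mps if nearest_obstacle_m(depth_m) > 0.5 else 0.0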

For communication, we use Twilio to send outbound SMS notifications. We also built deployment scripts and configuration tooling to quickly iterate between our development laptops and the robot hardware. The modular system architecture allows us to improve perception, navigation, or alerting independently without rewriting the entire pipeline.
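
The alerting path itself is only a few lines with Twilio's Python helper library; the phone numbers and message body below are placeholders:

    import os
    from twilio.rest import Client

    # Credentials come from the environment; never hard-code them.
    client = Client(os.environ["TWILIO_ACCOUNT_SID"],
                    os.environ["TWILIO_AUTH_TOKEN"])

    message = client.messages.create(
        to="+15551234567",     # caregiver number (placeholder)
        from_="+15557654321",  # Twilio number (placeholder)
        body="WatchDawg: possible fall detected. Robot is with the person.",
    )
    print(message.sid)  # keep the SID so delivery can be verified later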

Challenges we ran into

The biggest challenges were reliability and integration under real-world conditions.

Perception was only one part of the problem. Coordinating fall detection, depth-based distance estimation, obstacle detection, and motion control in real time required careful synchronization.

We encountered:

  • Timing issues in live robot control
  • Network instability when delivering alerts
  • Hardware-level inconsistencies in alarm playback despite successful command logs

We also had to design fallback alert paths for situations where the robot did not have direct internet access.

This project reinforced that a successful return code does not always mean real-world success. Verification and observability became critical parts of the system.
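
As a concrete example of that lesson, delivery verification and a fallback can be layered onto Twilio as in this sketch; the poll budget is illustrative and trigger_local_alarm is a hypothetical helper:

    import time

    def send_with_verification(client, **sms_kwargs) -> bool:
        """Send an SMS, then poll Twilio until delivery is confirmed."""
        try:
            sid = client.messages.create(**sms_kwargs).sid
        except Exception:
            return False                 # no connectivity: caller must fall back
        for _ in range(10):              # illustrative poll budget (~20 s)
            status = client.messages(sid).fetch().status
            if status == "delivered":
                return True
            if status in ("failed", "undelivered"):
                return False
            time.sleep(2)
        return False

    # If delivery cannot be confirmed, degrade to a local response, e.g.
    # an on-robot audible alarm ('trigger_local_alarm' is hypothetical):
    # if not send_with_verification(client, to=..., from_=..., body=...):
    #     trigger_local_alarm()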

Accomplishments that we're proud of

We successfully built and deployed an end-to-end autonomous system on real hardware.

WatchDawg can:

  • Detect a fall using onboard RGB vision
  • Estimate the person’s distance using RealSense depth data
  • Detect and avoid obstacles during movement
  • Walk a calculated distance and stop at a controlled interaction range
  • Send real SMS caregiver notifications in a single automated flow

We are especially proud that this works outside of simulation. The system integrates perception, navigation, and communication into a cohesive pipeline that runs fully on the robot.

We also built deployment scripts, runtime diagnostics, and logging systems to make the project reproducible and demo-ready.

What we learned

We learned that assistive robotics is far more about systems engineering discipline than any single model or API.

A demo can fail quickly when perception, depth sensing, motion control, networking, and service dependencies are not tightly coordinated. We learned to treat observability as a core feature rather than a debugging afterthought.

We became better at designing with failure in mind by adding verification checks, fallback logic, and explicit runtime validation instead of assuming everything works perfectly.

We also learned to distinguish between “works once” and “works reliably,” which fundamentally changed how we tested and structured the system.

What’s next for WatchDawg

The next version of WatchDawg will focus on:

  • Improving fall detection robustness and reducing false positives
  • Strengthening autonomous approach behavior in cluttered home layouts
  • Refining depth-based localization and obstacle-aware navigation
  • Expanding caregiver workflows with richer incident summaries and escalation logic
  • Supporting multi-device deployment for senior living environments

Built With

Python, ROS2, OpenCV, Intel RealSense, NVIDIA Jetson, Twilio, Unitree Go2