The Idea

Every year, tens of thousands of lives are lost to unsafe driving. We wanted to build something that could act as a silent copilot - constantly watching, always ready to intervene before tragedy strikes. Existing solutions were either too expensive or too slow, and we were motivated by the idea that the smartphone we already carry everywhere could be the guardian that keeps us safe.

The Product

Safeguard is a real-time driver safety system that monitors behavior continuously using your phone's camera. It detects:

- Drowsiness
- Alertness
- Microsleeping
- Distraction
- Intoxication
- Medical emergencies

When a critical event is detected, Safeguard automatically calls the driver's emergency contact with key information: the driver's name, GPS location, and condition. It also clips and uploads a 10-second video of the event to the cloud for first responders or family to review.
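The escalation step above boils down to packaging the driver's identity, detected condition, and location into a single alert. A minimal sketch of that payload and message (field names are our assumption, not Safeguard's actual schema):

```typescript
// Illustrative shape of a critical safety event; real field names may differ.
interface SafetyEvent {
  driverName: string;
  condition: "drowsiness" | "microsleep" | "distraction" | "intoxication" | "medical";
  location: { lat: number; lon: number };
  clipUrl: string; // 10-second event clip uploaded to the cloud
}

// Builds the message delivered to the driver's emergency contact.
function emergencyMessage(e: SafetyEvent): string {
  return (
    `Safety alert for ${e.driverName}: ${e.condition} detected at ` +
    `(${e.location.lat.toFixed(5)}, ${e.location.lon.toFixed(5)}). ` +
    `Event video: ${e.clipUrl}`
  );
}
```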

The Stack

Safeguard is a full-stack, multi-platform system consisting of:

- Native iOS app (Swift) - the primary client; embeds a high-performance JavaScript CV processor via WKWebView, running MediaPipe FaceLandmarker at ~15 FPS with 468 facial landmarks. A TemporalSmoother reduces jitter, and a StateStabilizer applies hysteresis-based classification to prevent false alarms.
- Dual-camera pipeline - uses AVCaptureMultiCamSession to simultaneously stream the driver's face and the road ahead.
- Backend (Node.js / Express / MongoDB) - ingests real-time telemetry, manages emergency contacts, and triggers the safety escalation pipeline.
- Cloudinary - stores the critical-event video clips linked to each safety event.
- Web dashboard (TanStack Start, React, Tailwind) - for debugging and monitoring.
- Cross-platform mobile client - built with React Native / Expo as an alternative interface.
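The TemporalSmoother mentioned above can be thought of as a sliding-window moving average over a per-frame signal derived from the facial landmarks (for example, an eye-openness measure). A minimal sketch, with illustrative names rather than Safeguard's actual API:

```typescript
// Sliding-window moving average to reduce per-frame landmark jitter.
class TemporalSmoother {
  private window: number[] = [];

  // size = 15 gives roughly 1 second of history at ~15 FPS.
  constructor(private size: number = 15) {}

  // Push a raw per-frame value; returns the smoothed value.
  push(value: number): number {
    this.window.push(value);
    if (this.window.length > this.size) this.window.shift();
    return this.window.reduce((a, b) => a + b, 0) / this.window.length;
  }
}
```

A larger window means smoother output but slower reaction to genuine state changes, so the window size trades latency against stability.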

Challenges

The biggest challenges we faced were:

- False-positive suppression - making the system sensitive enough to catch real events while not flagging normal driving behavior required neural warm-up periods and onset delays.
- Hybrid architecture bridging - passing high-frequency facial telemetry between the WKWebView JS layer and native Swift cleanly, without dropping frames or adding latency, was tricky.
- Dual-camera synchronization - running two simultaneous camera feeds on iOS with AVCaptureMultiCamSession while maintaining AI processing performance pushed the device to its limits.
- Real-time video clipping - intelligently triggering, buffering, encoding, and uploading 10-second anomaly clips without interrupting the monitoring pipeline required careful async design.
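The onset-delay idea behind the false-positive suppression can be sketched as a small state machine: a candidate state must persist for a minimum number of consecutive frames before the reported state actually changes. Names and thresholds here are illustrative assumptions, not Safeguard's real implementation:

```typescript
// Hysteresis-style state stabilizer: momentary misclassifications
// (a single "drowsy" frame during a blink) never reach the alert pipeline.
class StateStabilizer {
  private current = "alert";
  private candidate = "alert";
  private streak = 0;

  // onsetFrames = 10 is roughly 0.7 s at ~15 FPS.
  constructor(private onsetFrames: number = 10) {}

  // Feed the raw per-frame classification; returns the stabilized state.
  update(raw: string): string {
    if (raw === this.current) {
      this.streak = 0; // no pending change
    } else if (raw === this.candidate) {
      this.streak++; // same candidate persists
      if (this.streak >= this.onsetFrames) {
        this.current = raw; // commit only after the onset delay
        this.streak = 0;
      }
    } else {
      this.candidate = raw; // new candidate, restart the count
      this.streak = 1;
    }
    return this.current;
  }
}
```

The onset delay is the knob that trades detection latency for robustness: too short and blinks trigger alerts, too long and a real microsleep is reported late.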

Accomplishments

- A fully working end-to-end safety pipeline - face -> AI -> state/behavior -> alert -> SMS + video - all in under 15 seconds from event onset.
- Reliable multi-state detection (drowsiness, intoxication, medical) on-device at ~15 FPS with no cloud inference dependency.
- Five distinct layers - native iOS, cross-platform mobile, web dashboard, backend, and computer vision - built and integrated in a single hackathon weekend.

What's Next

- Android support
- Passenger/child detection - using the road-facing camera for additional situational awareness features.
- Regulatory partnerships - exploring integration with insurance providers and government safety programs.
