Inspiration
Drowsy and distracted driving cause thousands of crashes every year, yet most cars monitor only the road and not the driver. I wanted a low-cost, privacy-preserving way to catch microsleeps and distraction using hardware everyone already has.
What it does
Safe Drive analyzes a live cabin view (phone or dashcam) to detect eye closure, blink rate, and head pose in real time. When it sees sustained "eyes closed" or gaze off the road, it triggers visual/audio alerts. All processing happens on-device and no video leaves the car. A clear overlay shows detections and confidence so the system is explainable. A YOLO-backed API handles image processing and communication, and YOLOv5 powers driver-alert detection on video clips: object detection, labeling, and alert-monitoring signals.
How we built it
- Frontend: React + TypeScript + Vite, Tailwind for UI components.
- Vision pipeline: MediaPipe FaceMesh landmarks → Eye Aspect Ratio (EAR) + temporal smoothing to classify open/closed eyes; simple head-pose heuristics for distraction.
- Video I/O: HTML `<video>` from the webcam (getUserMedia) or an uploaded clip; canvas overlays for the HUD and boxes.
- Alerts: Rule engine (threshold + hold time) driving on-screen and audio cues.
- Architecture: Modular detectors + shared state bus so new models (e.g., YOLO/iris/pose) drop in without UI rewrites.
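The core of the vision pipeline above is the Eye Aspect Ratio. A minimal sketch of the EAR computation in TypeScript, using the standard six-landmark layout (the `Point` type and sample coordinates are illustrative, not the exact MediaPipe FaceMesh landmark indices the app uses):

```typescript
interface Point { x: number; y: number }

const dist = (a: Point, b: Point): number =>
  Math.hypot(a.x - b.x, a.y - b.y);

// Standard EAR layout: p1/p4 are the horizontal eye corners,
// (p2,p6) and (p3,p5) are the two vertical landmark pairs.
// EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|)
function eyeAspectRatio(
  p: [Point, Point, Point, Point, Point, Point]
): number {
  const [p1, p2, p3, p4, p5, p6] = p;
  return (dist(p2, p6) + dist(p3, p5)) / (2 * dist(p1, p4));
}

// Illustrative open-eye geometry: wide corners, tall vertical pairs.
const openEye: [Point, Point, Point, Point, Point, Point] = [
  { x: 0, y: 0 },  // p1: left corner
  { x: 1, y: 1 },  // p2: upper lid
  { x: 2, y: 1 },  // p3: upper lid
  { x: 6, y: 0 },  // p4: right corner
  { x: 2, y: -1 }, // p5: lower lid
  { x: 1, y: -1 }, // p6: lower lid
];

eyeAspectRatio(openEye); // ≈ 0.33 — open eyes sit near 0.25–0.35; closed eyes drop toward 0
```

As the eyelids close, the vertical distances shrink while the corner distance stays fixed, so EAR falls toward zero; thresholding that ratio is what the classifier builds on.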
Challenges we ran into
- Real-world variance: Glasses, low light, and head tilt broke naive thresholds → added EMA smoothing, per-user calibration, and hold times to reduce false alarms.
- Browser quirks: Autoplay/permissions on iOS, local file CORS, and 1080p performance tuning.
- Tooling detours: TypeScript JSX flags, Vite config, dependency/type conflicts (React/Three/MediaPipe).
- UX balance: Alerts must be noticeable without being annoying; iterated on color, copy, and timing.
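The EMA smoothing and hold-time fix mentioned above can be sketched as a small stateful detector. The class name, smoothing factor, threshold, and hold duration here are illustrative placeholders, not the app's tuned values:

```typescript
// Threshold + hold-time rule engine over EMA-smoothed EAR samples.
// An alert fires only when smoothed EAR stays below the threshold
// for the full hold duration, cutting blink-induced false alarms.
class DrowsinessDetector {
  private ema: number | null = null;
  private closedSinceMs: number | null = null;

  constructor(
    private readonly alpha = 0.3,         // EMA weight on the new sample
    private readonly earThreshold = 0.21, // below this counts as "closed"
    private readonly holdMs = 1200        // closure must persist this long
  ) {}

  // Feed one raw EAR sample with its timestamp (ms).
  // Returns true when a sustained closure should trigger an alert.
  update(ear: number, nowMs: number): boolean {
    this.ema =
      this.ema === null ? ear : this.alpha * ear + (1 - this.alpha) * this.ema;
    if (this.ema < this.earThreshold) {
      this.closedSinceMs ??= nowMs; // start the hold timer
      return nowMs - this.closedSinceMs >= this.holdMs;
    }
    this.closedSinceMs = null; // eyes reopened: reset the timer
    return false;
  }
}
```

Two effects fall out of this structure: a single-frame blink never reaches the hold duration, and the EMA keeps one noisy landmark frame from flipping the open/closed state.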
Accomplishments that we're proud of
- Fully on-device drowsiness detection with sub-100 ms latency on a laptop webcam.
- Transparent overlay (EAR, confidence, status) to build user trust.
- Clean component architecture enabling quick swap from recording → live and rapid detector additions.
- Significant false-positive reduction after adding smoothing and hold-time logic.
What we learned
- Lightweight features (EAR) + solid temporal logic can beat heavyweight models for real-time tasks.
- Human factors matter: alert timing and copy impact compliance more than raw accuracy.
- Browser media pipelines are powerful but picky: encoding, permissions, and device quirks need defensive code.
- In our project links you will be able to reach the frontend via GitHub.
- Strong typing and modular design make rapid pivots feasible under hackathon pressure.
What's next for Safe Drive
- Robustness: Per-user calibration, ambient-light compensation, sunglasses handling.
- Signals: Yaw/pitch/roll head-pose tracking, yawn detection, phone-in-hand distraction.
- Models: Lightweight on-device YOLO/eye-state via WebGPU (still privacy-first).
- Mobile: Package with Capacitor for iOS/Android; background audio alerts.
- Fleet/insurer: Dashboard for risk trends, coaching moments, and telematics/ELD APIs.
- Safety polish: Configurable alert profiles, night mode, and fail-safe behavior.
Built With
- figma
- postgresql
- typescript
- yolo