Inspiration

Drowsy driving causes thousands of preventable accidents every year, yet most safety systems react only after a crash. We wanted to create an AI-driven solution that detects fatigue instantly and simulates exactly how a real vehicle should respond to protect the driver, passengers, and everyone on the road.

What it does

Safe-Drive AI uses real-time face-mesh tracking to detect eye closure and signs of drowsiness, then autonomously triggers a realistic vehicle-response simulation. As the driver becomes unresponsive, the system smoothly slows the car, activates hazard lights, sounds the horn, and, after prolonged drowsiness, initiates a simulated emergency call with location details.
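The staged response described above (slow → hazards → horn → emergency call) can be sketched as a simple time-based escalation. This is an illustrative sketch, not Safe-Drive AI's actual code, and the stage names and millisecond thresholds are assumptions:

```typescript
// Illustrative escalation ladder: continuous eyes-closed time drives the
// simulated vehicle response. Thresholds are hypothetical tuning values.
type Stage = "normal" | "slowing" | "hazards" | "horn" | "emergencyCall";

function stageFor(eyesClosedMs: number): Stage {
  if (eyesClosedMs < 1500) return "normal";        // ignore ordinary blinks
  if (eyesClosedMs < 3000) return "slowing";       // begin smooth deceleration
  if (eyesClosedMs < 5000) return "hazards";       // hazard lights on
  if (eyesClosedMs < 8000) return "horn";          // sound the horn
  return "emergencyCall";                          // simulated call with location
}
```

Keeping the mapping a pure function of elapsed time makes each stage easy to test and keeps the rendering loop free to query the current stage every frame.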

How we built it

We combined MediaPipe face-mesh for eye tracking, a state machine to compute drowsiness levels, and a fully client-side architecture using React and Web Workers, so all vision processing runs in the browser. The driving simulation was built with animated canvas rendering, smooth speed-based scrolling, hazard-light effects, and integrated audio for the horn and alerts.
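One common way to score eye closure from face-mesh landmarks is the eye aspect ratio (EAR): vertical eyelid distances divided by the horizontal eye width, which drops toward zero as the eye closes. The sketch below shows the idea on six generic eye landmarks; the landmark ordering and the threshold are assumptions, not Safe-Drive AI's actual implementation:

```typescript
// Eye aspect ratio (EAR) sketch for drowsiness detection.
// p = [outerCorner, upper1, upper2, innerCorner, lower2, lower1] — an
// assumed ordering; real MediaPipe face-mesh indices would be mapped to it.
interface Point { x: number; y: number }

const dist = (a: Point, b: Point): number => Math.hypot(a.x - b.x, a.y - b.y);

// EAR = (sum of vertical eyelid distances) / (2 * horizontal eye width).
function eyeAspectRatio(p: Point[]): number {
  return (dist(p[1], p[5]) + dist(p[2], p[4])) / (2 * dist(p[0], p[3]));
}

const CLOSED_THRESHOLD = 0.2; // hypothetical tuning value; open eyes score higher

function eyesClosed(p: Point[]): boolean {
  return eyeAspectRatio(p) < CLOSED_THRESHOLD;
}
```

Because EAR is a ratio of distances, it is largely insensitive to how far the driver sits from the camera, which is why it is a popular per-frame signal to feed into a drowsiness state machine.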

Challenges we ran into

Our biggest challenge was creating a natural, realistic emergency stop instead of an instant freeze. Synchronizing speed decay, horn audio, hazard blinking, and road animation required careful timing. Ensuring low-latency computer-vision processing inside the browser without performance drops was another major hurdle.
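A natural-looking stop usually comes from decaying speed toward zero each frame rather than subtracting a fixed amount. A minimal, frame-rate-independent sketch of that idea (illustrative only; the half-life constant and function names are assumptions):

```typescript
// Exponential decay of speed toward a target, independent of frame rate:
// two 400 ms steps produce the same result as one 800 ms step, so the
// deceleration looks identical at 30 fps and 60 fps.
function decaySpeed(
  current: number,   // current speed (e.g. km/h)
  target: number,    // target speed (0 for an emergency stop)
  dtMs: number,      // elapsed time since last frame, in ms
  halfLifeMs = 800   // hypothetical tuning value: time to halve the gap
): number {
  const k = Math.pow(0.5, dtMs / halfLifeMs);
  return target + (current - target) * k;
}
```

Driving the road-scroll animation, hazard-blink phase, and horn cue from the same clock that supplies `dtMs` is one way to keep all the effects synchronized during the stop.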

Accomplishments that we're proud of

We achieved a fluid, polished simulation that feels like a real driver-assist system, complete with smooth deceleration, believable road motion, accurate eye-tracking, and an emergency-response sequence that looks and behaves like an actual vehicle safety protocol.

What we learned

We learned how to integrate computer vision, real-time animations, and state-based logic into a single seamless experience. We also gained experience in browser-level optimization, human-safety UX, and designing systems where timing and responsiveness matter as much as accuracy.

What's next for Safe-Drive AI

We're planning to expand it with lane-departure detection, yawning and gaze-tracking models, night-mode performance, and voice alerts, and eventually to integrate the system into an actual microcontroller-based car setup for real-world testing.
