Inspiration

We were inspired by the growing number of road accidents caused by driver drowsiness, distraction, and momentary inattention. A few seconds of closed eyes or looking away from the road can lead to devastating consequences.

Advanced safety technologies exist, but they are often expensive and limited to high-end vehicles. We wanted to design a low-cost, scalable system that could bring intelligent safety features to everyday vehicles.

The goal was simple: Detect danger early. Intervene safely. Reduce harm before impact.

What it does

Our system continuously monitors the driver’s face and eyes in real time. If it detects that the driver’s eyes are closed or the face is not visible, it automatically slows the vehicle over five seconds, activates blinking indicator lights to warn other drivers, and sounds an audible alert. This transforms moments of inattention into a controlled safety response, reducing the risk of collisions and preventing serious accidents.
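The five-second slowdown can be sketched as a simple linear ramp. This is an illustrative sketch only: the function name and the idea of scaling a PWM-style speed value are our assumptions, not the exact firmware code.

```python
def speed_ramp(initial_speed: float, elapsed_s: float, ramp_s: float = 5.0) -> float:
    """Linearly ramp the motor speed down to zero over `ramp_s` seconds.

    `initial_speed` is the speed (e.g. a PWM duty value) at the moment
    inattention was detected; `elapsed_s` is the time since that moment.
    """
    if elapsed_s >= ramp_s:
        return 0.0
    return initial_speed * (1.0 - elapsed_s / ramp_s)
```

For example, a vehicle running at duty 200 would be commanded to 100 halfway through the ramp and to 0 at the five-second mark, rather than braking abruptly.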

How we built it

We built the system using a combination of computer vision and embedded control. A camera monitors the driver's face while a Python script processes over 400 facial landmarks to track eye visibility and attention in real time. The script communicates with an ESP32 microcontroller over a serial link; the ESP32 drives a motor driver, LED indicators, and a buzzer. When the driver is inattentive, the ESP32 gradually reduces motor speed, blinks the warning LEDs, and triggers the buzzer. The system also includes a startup handshake, a fail-safe timeout, and smooth deceleration logic to keep operation safe and controlled.
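The eye-visibility check can be illustrated with the standard eye-aspect-ratio (EAR) heuristic over landmark coordinates. A minimal sketch, assuming six (x, y) eye landmarks from a face-mesh model; the specific landmark roles and the 0.2 threshold are illustrative values, not the exact parameters of our script:

```python
import math

def eye_aspect_ratio(p1, p2, p3, p4, p5, p6):
    """Compute the eye aspect ratio from six (x, y) eye landmarks.

    p1/p4 are the horizontal eye corners; p2/p3 are upper-lid points and
    p6/p5 the corresponding lower-lid points. The EAR drops toward zero
    as the eye closes.
    """
    vertical = math.dist(p2, p6) + math.dist(p3, p5)
    horizontal = math.dist(p1, p4)
    return vertical / (2.0 * horizontal)

def eye_closed(ear, threshold=0.2):
    # Illustrative threshold: a wide-open eye typically sits well above 0.2.
    return ear < threshold
```

In the real pipeline these points would come from the face-mesh output each frame; when no face is detected at all, the system treats the driver as inattentive by the same fail-safe path.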

Challenges we ran into

One major challenge was achieving accurate eye and face tracking in real time, especially distinguishing normal blinking from drowsiness. Another issue was coordinating communication between the Python script and the ESP32 to ensure timely motor and LED responses without delays or false triggers. We also had to implement smooth deceleration in the motor driver so the vehicle slows safely rather than stopping abruptly. Finally, ensuring the system remains off at startup and only activates when the Python program is running required adding a proper handshake and system-armed logic.
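A common way to separate a normal blink from drowsiness is to require the eyes to stay closed across several consecutive frames before triggering. A hedged sketch of that idea; the class name and frame threshold are our illustration (at roughly 30 fps, 15 frames is about half a second of continuous closure):

```python
class DrowsinessMonitor:
    """Distinguish a normal blink from drowsiness by requiring the eyes
    to remain closed for several consecutive frames before alerting."""

    def __init__(self, closed_frames_limit=15):
        self.closed_frames_limit = closed_frames_limit
        self.closed_count = 0

    def update(self, eyes_closed: bool) -> bool:
        """Feed one frame's eye state; return True when drowsiness is flagged."""
        if eyes_closed:
            self.closed_count += 1
        else:
            self.closed_count = 0  # the eyes reopened: it was only a blink
        return self.closed_count >= self.closed_frames_limit
```

A brief blink resets the counter before it reaches the limit, so only sustained closure escalates to the slowdown-and-alert response.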

Accomplishments that we're proud of

We are proud that we built a fully working prototype that combines real-time computer vision with embedded motor control, successfully detecting driver inattention and responding automatically. Our system not only slows the vehicle safely but also alerts the driver and surrounding traffic, reducing the risk of collisions. By transforming moments of drowsiness or distraction into controlled safety interventions, this solution has the potential to save lives, protect property, and address a global problem that causes thousands of preventable accidents every day.

What we learned

We learned how to integrate computer vision with embedded systems, including real-time facial and eye tracking, serial communication with a microcontroller, and motor control for safety applications. We also gained experience designing reliable fail-safe logic, implementing gradual deceleration, and coordinating multiple outputs like LEDs and buzzers. Beyond the technical skills, we learned the importance of system robustness, precise timing, and clear communication between hardware and software to build solutions that can have real-world safety impact.

What's next for SafeSightAI

We plan to improve SafeSightAI's accuracy and usability by adding predictive drowsiness detection based on blink rate and head-pose analysis. We also aim to integrate GPS-based alerts, cloud logging, and a mobile dashboard for fleet monitoring, and the system could be adapted for motorcycles, delivery vehicles, and commercial fleets. Ultimately, we want SafeSightAI to be a low-cost, scalable solution that brings intelligent driver safety to everyday vehicles worldwide, helping prevent accidents and save lives on a global scale.
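The planned blink-rate signal could be computed over a sliding time window; an elevated blink rate is one early indicator of fatigue. A sketch under stated assumptions: the window length, class name, and the idea of feeding timestamped blink events are all illustrative, since this feature is not yet built.

```python
from collections import deque

class BlinkRateTracker:
    """Track blinks per minute over a sliding time window."""

    def __init__(self, window_s=60.0):
        self.window_s = window_s
        self.blinks = deque()  # timestamps (seconds) of recent blinks

    def record_blink(self, t: float) -> None:
        self.blinks.append(t)

    def rate_per_minute(self, now: float) -> float:
        # Drop blink events that have aged out of the window.
        while self.blinks and now - self.blinks[0] > self.window_s:
            self.blinks.popleft()
        return len(self.blinks) * (60.0 / self.window_s)
```

Combined with head-pose angles from the same face-mesh landmarks, this rate could feed a predictive score that warns before the eyes ever close.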

Built With
