Inspiration
Driving at night has always felt a little uncomfortable: the dim roads, the sudden shadows, the fear of not seeing something in time. We've both had moments where pedestrians, stray animals, or potholes appeared out of nowhere, and those moments made us realize how risky low-visibility driving really is. That sparked a question: can AI act like an extra pair of eyes when visibility fails? The idea slowly turned into our project: a system that tries to keep drivers safe when the road doesn't.
What it does
Our project detects people, vehicles, animals, motorbikes, and potholes in real time, even under low-light conditions. Using a webcam feed, the system:
1- Enhances dark frames
2- Runs YOLOv12(s) detection
3- Estimates the distance to each detected object
4- Raises an alert when something gets too close
It’s basically a small AI co-driver that warns you before you miss something important on the road.
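The four steps above can be sketched as a single loop. This is a minimal sketch, not our exact code: the weights path (`best.pt`), the alert threshold, and the camera constants (`FY`, `CY`, `CAM_HEIGHT_M`) are assumed placeholders, and CLAHE stands in as one possible low-light enhancement.

```python
ALERT_DISTANCE_M = 5.0   # assumed warning threshold
FY, CY = 800.0, 360.0    # assumed focal length / principal point (pixels)
CAM_HEIGHT_M = 1.2       # assumed camera mounting height

def should_alert(distance_m: float, threshold_m: float = ALERT_DISTANCE_M) -> bool:
    """An object closer than the threshold triggers a warning."""
    return distance_m < threshold_m

def run():
    # Heavy imports live inside the entry point so the pure logic above
    # stays importable without OpenCV/Ultralytics installed.
    import cv2
    from ultralytics import YOLO

    model = YOLO("best.pt")  # hypothetical path to the trained weights
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    cap = cv2.VideoCapture(0)  # webcam feed
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # 1. Enhance dark frames: CLAHE on the luminance channel only.
        lab = cv2.cvtColor(frame, cv2.COLOR_BGR2LAB)
        lab[:, :, 0] = clahe.apply(lab[:, :, 0])
        frame = cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)
        # 2. Run detection on the enhanced frame.
        results = model(frame, verbose=False)[0]
        for box in results.boxes:
            x1, y1, x2, y2 = box.xyxy[0].tolist()
            # 3. Ground-plane distance from the bbox bottom edge.
            dist = FY * CAM_HEIGHT_M / max(y2 - CY, 1.0)
            # 4. Alert when the object is too close.
            color = (0, 0, 255) if should_alert(dist) else (0, 255, 0)
            cv2.rectangle(frame, (int(x1), int(y1)), (int(x2), int(y2)), color, 2)
            cv2.putText(frame, f"{dist:.1f} m", (int(x1), int(y1) - 5),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.6, color, 2)
        cv2.imshow("low-light assist", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break
    cap.release()
    cv2.destroyAllWindows()

if __name__ == "__main__":
    run()
```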
How we built it
We started by collecting and preparing low-light data from the ExDark dataset, filtering down to only the classes we needed. We then annotated everything in Roboflow and exported a YOLOv12-ready dataset.

Training ran on our college's DGX A100 server, accessed through PuTTY and WinSCP (yes, a lot of transferring files back and forth!). We trained YOLOv12(s) for 200 epochs, with augmentations tuned for dark images.

For distance estimation, we calibrated our webcam with checkerboard images, extracted the intrinsic parameters, and applied the ground-plane formula to get an approximate real-world distance.

Finally, we built the real-time detection + alert system using:
1- Ultralytics YOLO
2- OpenCV
3- Python
4- A webcam feed
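The calibration step looks roughly like the following. This is a hedged sketch, not our exact script: the image glob, the 9x6 inner-corner board size, and the 25 mm square size are assumed placeholders.

```python
def split_intrinsics(K):
    """Extract (fx, fy, cx, cy) from a 3x3 camera matrix (array or nested lists)."""
    return K[0][0], K[1][1], K[0][2], K[1][2]

def calibrate(image_glob="calib/*.jpg", board=(9, 6), square_m=0.025):
    # OpenCV/NumPy imports kept local so split_intrinsics stays dependency-free.
    import glob
    import cv2
    import numpy as np

    # One 3D grid of board corners (all on the z = 0 plane), reused per view.
    objp = np.zeros((board[0] * board[1], 3), np.float32)
    objp[:, :2] = np.mgrid[0:board[0], 0:board[1]].T.reshape(-1, 2) * square_m

    obj_pts, img_pts, size = [], [], None
    for path in glob.glob(image_glob):
        gray = cv2.cvtColor(cv2.imread(path), cv2.COLOR_BGR2GRAY)
        found, corners = cv2.findChessboardCorners(gray, board)
        if found:
            obj_pts.append(objp)
            img_pts.append(corners)
            size = gray.shape[::-1]
    # K is the 3x3 intrinsic matrix; dist holds the lens-distortion coefficients.
    rms, K, dist, _, _ = cv2.calibrateCamera(obj_pts, img_pts, size, None, None)
    return K, dist
```

The `fy` and `cy` values pulled out by `split_intrinsics` are exactly what the ground-plane distance formula needs at runtime.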
Challenges we ran into
1- Dataset issues: We quickly realized that low-light images are messy. Some objects are barely visible, so annotations had to be very precise.
2- Model training struggles: Tweaking training configurations on the DGX without crashing anything sometimes felt like defusing a bomb.
3- Distance estimation: Getting an accurate distance from a single camera was harder than we expected; a small angle error could throw everything off.
4- Low-light noise: The model had to learn to handle noise, glare, and blur all at once.
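The single-camera distance trick is the ground-plane formula: assuming a level road and a camera mounted at height H, an object's ground contact point at pixel row v maps to distance Z ≈ fy · H / (v − cy). The numbers in the example comment are illustrative, not from our calibration.

```python
def ground_plane_distance(v, fy, cy, cam_height_m):
    """Approximate distance to an object's ground contact point at pixel row v.

    Assumes a level road and a roughly forward-facing camera; only valid for
    rows below the horizon (v > cy), which is exactly why a small tilt error
    can change everything.
    """
    if v <= cy:
        raise ValueError("point is at or above the horizon")
    return fy * cam_height_m / (v - cy)

# e.g. fy = 800 px, cy = 360 px, camera 1.2 m above the road,
# bounding-box bottom at row 600:
# ground_plane_distance(600, 800, 360, 1.2) -> 4.0 (metres)
```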
Accomplishments that we're proud of
1- Training YOLOv12(s) successfully on a DGX server
2- Getting real-time alerts working smoothly
3- Achieving good detection results even with very dark frames
4- Integrating distance estimation into the detection pipeline
5- Actually seeing the alert pop up at the right time — that moment felt amazing
Most importantly, we’re proud that we built something meaningful, something that could genuinely help make roads safer.
What we learned
We ended up learning much more than we expected, from understanding how YOLO works behind the scenes to realizing how crucial preprocessing is for low-light images. We got hands-on experience with proper dataset annotation, camera calibration, and building an efficient real-time system. Above all the technical lessons, though, the biggest one was teamwork: dividing tasks, coordinating smoothly, and keeping each other motivated throughout the process.
What's next for AI Based Driving Assistance for Low Visibility Path
1- Expand the dataset and add more classes, such as traffic lights and road signs
2- Deploy on Raspberry Pi or an embedded device for portability
3- Integrate IR or thermal sensors for extreme darkness
4- Build a complete dashboard with logs, alerts, and visual analytics