Inspiration

We started this project because collisions between vehicles and animals are a growing concern worldwide. Our goal was to develop a system that could help prevent these accidents by detecting animals on roads and notifying drivers in real time.

What it does

Our system uses a camera placed near the road to capture video of the surrounding area. The footage is processed with computer vision to detect and identify animals near the roadway. When an animal is detected, the system immediately sends a notification through a mobile app to alert nearby drivers of a potential wildlife hazard.

How we built it

We connected a low-cost but effective camera to a Raspberry Pi to capture and store live video of the roadway. We also connected a button and an LED to the Raspberry Pi, allowing us to easily start and stop recordings while providing a visual indicator of the system’s status. The Raspberry Pi processes the video stream and sends frames to our pre-trained computer vision model for analysis.
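The start/stop behavior described above can be sketched as a small state machine. On the Pi itself, gpiozero's `Button` and `LED` classes would drive these calls; here the hardware is abstracted away so the toggle logic stands on its own (class and method names are our own illustration, not the project's actual code):

```python
class RecorderControl:
    """Tracks recording state; the status LED mirrors whether we are recording."""

    def __init__(self):
        self.recording = False
        self.led_on = False

    def button_pressed(self):
        # Each button press toggles recording, and the LED follows the
        # recording state so the device's status is visible at a glance.
        self.recording = not self.recording
        self.led_on = self.recording
        return self.recording
```

On the Pi, this would be wired up with something like `Button(17).when_pressed = ctrl.button_pressed` (the GPIO pin number is an assumption).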

To train our system to detect animals, we began with a pre-trained YOLO computer vision model. We then fine-tuned the model using collected video footage of animals such as deer, allowing it to recognize wildlife that may enter roadways.
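Once the fine-tuned model runs on a frame, its raw detections still need to be filtered down to confident wildlife hits. A minimal sketch of that step, assuming each frame's detections arrive as `(label, confidence)` pairs (e.g. unpacked from a YOLO result object) and using an example class set and threshold that would be tuned against validation footage:

```python
# Example wildlife class set and confidence cutoff -- both are assumptions
# to be tuned against the collected footage, not the project's real values.
WILDLIFE_CLASSES = {"deer", "elk", "moose"}
CONF_THRESHOLD = 0.5

def wildlife_detections(detections, threshold=CONF_THRESHOLD):
    """Keep only confident wildlife hits from one frame's detections.

    `detections` is a list of (label, confidence) pairs, such as the
    class name and score read out of a YOLO prediction result.
    """
    return [
        (label, conf)
        for label, conf in detections
        if label in WILDLIFE_CLASSES and conf >= threshold
    ]
```

Filtering at this stage keeps vehicles and low-confidence detections from triggering false hazard alerts downstream.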

To maximize the system’s effectiveness, we considered the best placement for the camera setup. We decided to mount it on a streetlight near the road so it could monitor a wider area. To make this possible, we designed and 3D printed a protective enclosure that houses the hardware and allows the device to be securely attached to the streetlight.

To alert drivers about potential wildlife hazards, we developed a mobile application with a map-based interface similar to Google Maps. When the system detects an animal near the roadway, the app places an animal icon on the map to mark the hazard location and sends a real-time notification to nearby drivers so they can proceed with caution.
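The "nearby drivers" check described above can be sketched as a simple geofence: compute the great-circle distance between the hazard and each driver, and notify those within some radius. The function names, driver-id structure, and the 1 km default radius are all illustrative assumptions:

```python
from math import radians, sin, cos, asin, sqrt

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in metres

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two (lat, lon) points."""
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi = radians(lat2 - lat1)
    dlam = radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlam / 2) ** 2
    return 2 * EARTH_RADIUS_M * asin(sqrt(a))

def drivers_to_notify(hazard, drivers, radius_m=1_000):
    """Return ids of drivers within `radius_m` of the hazard.

    `hazard` is a (lat, lon) pair; `drivers` maps driver id -> (lat, lon).
    """
    return [
        driver_id
        for driver_id, pos in drivers.items()
        if haversine_m(*hazard, *pos) <= radius_m
    ]
```

The same hazard coordinates that drive the notification would also place the animal icon on the app's map.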

Challenges we ran into

One of the biggest challenges we faced during this project was our limited budget. Our initial idea was to mount a LiDAR sensor on top of a car to scan the surroundings and measure the distance and shape of objects in the environment. We planned to use a machine learning model to determine whether detected objects were animals based on their shapes. However, LiDAR systems were far beyond our budget. Although a single-point LiDAR was more affordable, we determined that it would likely produce inaccurate data and require complicated setups, especially given the challenges of collecting measurements from a fast-moving vehicle.

We also considered using other sensors such as ultrasonic, thermal, or standard cameras mounted on a vehicle. However, these options presented issues with measurement accuracy or limitations in detecting animals at longer distances. Because of these constraints, we ultimately decided that mounting a stationary camera near the road would be the most practical and reliable approach for detecting animals and alerting drivers.

Another challenge we encountered was integrating the camera with the Raspberry Pi. Since we had limited experience working with Raspberry Pi systems, it took time to configure the hardware and ensure that the video stream could be processed correctly.

Training our computer vision model also presented difficulties. Although we started with a pre-trained model, adapting it to accurately detect animals using our collected data required additional experimentation and adjustments.

Finally, we faced some challenges when designing the enclosure for our hardware. The box needed to withstand prolonged sun exposure while being large enough to house the entire setup. More durable materials, such as the PETG available in the ECE makerspace, proved impractical given the cost and size constraints of our design. As a result, we had to explore alternative methods, such as laser cutting, to create a suitable enclosure.

What we learned

Throughout this project, we learned a lot about integrating software, hardware, machine learning, and app development into a single working system that contributes to sustainability. Even though we had little prior experience with computer vision and machine learning, we challenged ourselves to learn the necessary tools and techniques.

Built With

  • yolo