Inspiration

Ocean cleanup is one of the most urgent environmental challenges of our time. But as we researched existing solutions, we noticed a critical flaw: most cleanup robots collect indiscriminately, scooping up marine life along with the trash. We wanted to build something smarter. Something that actually understands what it is looking at before it acts.

What it does

Ocean Guardian is an AI-powered autonomous cleanup system that uses real-time computer vision to distinguish between ocean trash and marine life. When the system detects a fish or sea creature, it immediately halts collection. When the area is clear, it collects. It also runs a live 2D simulation showing the robot navigating the ocean, avoiding obstacles, collecting debris, and returning to a docking station when its inventory is full.
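At its core, the per-frame decision reduces to a simple rule: if any marine life is confidently detected, halt; otherwise, collect. A minimal sketch of that rule (the class names and confidence threshold here are illustrative placeholders, not our exact model configuration):

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str        # class name reported by the detector
    confidence: float # detector confidence in [0, 1]

# Labels treated as protected wildlife. The real set depends on the
# classes the model was trained on; these are placeholder names.
MARINE_LIFE = {"fish", "turtle", "jellyfish"}

def decide_action(detections: list[Detection], threshold: float = 0.5) -> str:
    """Return "HALT" if any marine life is confidently detected, else "COLLECT"."""
    for det in detections:
        if det.label in MARINE_LIFE and det.confidence >= threshold:
            return "HALT"
    return "COLLECT"
```

The threshold matters: set it too low and random objects trigger false halts; too high and a real fish can slip through, which is exactly the tuning problem described below.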

How we built it

We built the system in Python using YOLOv8 for object detection and OpenCV for the camera feed. We trained a custom AI model on real fish species datasets and ocean debris datasets, then built a decision engine that evaluates each frame and determines the safest robot action. The simulation was built with Pygame and runs alongside the live camera feed. The entire system is modular, so the AI model can be swapped or upgraded without changing any other code.
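The modularity comes from keeping the rest of the system coded against a small detector interface rather than YOLOv8 directly. A rough sketch of that idea, with a stub detector standing in for the real model (the interface shape and names are assumptions for illustration, not our actual class definitions):

```python
from typing import Protocol

# (label, confidence, (x1, y1, x2, y2) bounding box)
DetectionTuple = tuple[str, float, tuple[int, int, int, int]]

class Detector(Protocol):
    """The only surface the decision engine and simulation depend on."""
    def detect(self, frame) -> list[DetectionTuple]: ...

class StubDetector:
    """Stand-in detector for testing without a camera or trained weights.

    A YOLOv8-backed implementation would satisfy the same interface,
    so swapping models changes no downstream code.
    """
    def __init__(self, canned: list[DetectionTuple]):
        self.canned = canned

    def detect(self, frame) -> list[DetectionTuple]:
        return self.canned

def confident_labels(detector: Detector, frame, threshold: float = 0.5) -> list[str]:
    """Labels the decision engine would act on for this frame."""
    return [label for label, conf, _box in detector.detect(frame) if conf >= threshold]
```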

Challenges we ran into

Training the AI model was the biggest challenge. We ran into issues with dataset formats, mismatched class labels, and training sessions getting interrupted. Getting the detection confidence tuned correctly so the system was sensitive enough to catch fish without triggering false positives on random objects took significant iteration. We also had to solve a flickering problem where marine life alerts would flash on and off rapidly, which we fixed by adding a cooldown timer to keep alerts stable.
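The flickering fix is essentially a latch with a cooldown: once marine life is seen, the alert stays active for a short window even if the next few frames miss the detection. A minimal sketch of that mechanism (the class and the 2-second window are illustrative, not our exact values):

```python
class AlertLatch:
    """Holds a marine-life alert active for `cooldown` seconds after the
    last positive detection, so per-frame detection noise cannot make the
    alert flash on and off rapidly."""

    def __init__(self, cooldown: float):
        self.cooldown = cooldown
        self._last_seen: float | None = None

    def update(self, detected: bool, now: float) -> bool:
        """Feed one frame's result; return whether the alert should show."""
        if detected:
            self._last_seen = now
        if self._last_seen is None:
            return False
        return (now - self._last_seen) <= self.cooldown
```

Passing the clock in as `now` keeps the logic deterministic and easy to test; the live system would pass `time.monotonic()`.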

Accomplishments that we're proud of

We are proud that the AI model actually works in real time on a standard laptop with no specialized hardware. The proximity safety logic, which blocks trash collection when marine life is detected nearby, works reliably. The simulation visually communicates every decision the robot makes, which makes the whole system easy to understand and demo.
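The proximity check can be sketched as a distance test between bounding-box centers: collection is blocked when any detected animal is too close to the trash item. The pixel-distance threshold and box format below are assumptions for illustration:

```python
import math

Box = tuple[float, float, float, float]  # x1, y1, x2, y2 in pixel coordinates

def center(box: Box) -> tuple[float, float]:
    x1, y1, x2, y2 = box
    return ((x1 + x2) / 2, (y1 + y2) / 2)

def safe_to_collect(trash: Box, animals: list[Box], min_dist: float = 150.0) -> bool:
    """Block collection if any animal's center is within min_dist pixels
    of the trash item's center. The 150 px default is illustrative; the
    right value depends on camera geometry and robot reach."""
    tx, ty = center(trash)
    for animal in animals:
        ax, ay = center(animal)
        if math.hypot(ax - tx, ay - ty) < min_dist:
            return False
    return True
```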

What we learned

We learned how to train a custom YOLOv8 model from scratch, how to prepare and merge multiple datasets into a single training pipeline, and how to build a multi-threaded system where computer vision and a live simulation run simultaneously. We also learned that the gap between a working model and a reliable demo is significant, and that tuning thresholds matters as much as the model itself.
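The multi-threaded pattern we ended up with boils down to a worker thread publishing results to a queue while the main loop consumes them. A stripped-down sketch under stated assumptions (the frame list stands in for the camera feed, and the string result stands in for a detection):

```python
import queue
import threading

def vision_worker(frames, out: queue.Queue, done: threading.Event) -> None:
    """Detection thread: processes each frame and publishes a result.
    In the real system this loop would read from OpenCV and run the model."""
    for frame in frames:
        out.put(f"processed {frame}")
    done.set()  # signal the consumer that no more results are coming

def run_pipeline(frames) -> list[str]:
    """Main loop: drain results until the worker is finished and the queue is empty.
    The simulation would render between gets instead of just collecting."""
    out: queue.Queue = queue.Queue()
    done = threading.Event()
    worker = threading.Thread(target=vision_worker, args=(frames, out, done))
    worker.start()
    results = []
    while not (done.is_set() and out.empty()):
        try:
            results.append(out.get(timeout=0.1))
        except queue.Empty:
            pass
    worker.join()
    return results
```

Using a queue plus an event avoids sharing mutable state between the vision and simulation threads, which was the main source of subtle bugs.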

What's next for Ocean Guardian: AI-Powered Autonomous Marine Cleanup

The next step is deploying onto physical hardware, starting with a Raspberry Pi or Jetson Nano mounted on a small boat chassis. We also plan to integrate GPS logging so every collection event is mapped, building a real-time pollution heatmap. On the AI side, we want to train on a larger underwater dataset to improve accuracy in real ocean conditions.
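The heatmap part of that plan can be sketched as binning each GPS-tagged collection event into a grid cell and counting per cell. The cell size below is an illustrative assumption (0.01 degrees of latitude is roughly 1 km), not a committed design:

```python
from collections import Counter

def bin_collections(events: list[tuple[float, float]], cell_deg: float = 0.01) -> Counter:
    """Bin (lat, lon) collection events into cell_deg-degree grid cells.
    The resulting counts per cell are what a heatmap layer would render."""
    heat: Counter = Counter()
    for lat, lon in events:
        cell = (int(lat // cell_deg), int(lon // cell_deg))
        heat[cell] += 1
    return heat
```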

Built With

opencv, pygame, python, yolov8