IRIS was inspired by a simple question: why does advanced autonomy only exist in million-dollar robots? We wanted to prove that real-time perception, decision-making, and control could run entirely on low-cost edge hardware without cloud dependence.

We built IRIS using a Raspberry Pi, a camera, and custom software that processes visual data in real time, locks onto verified targets, and navigates smoothly toward them. Through this project, we learned how to design an end-to-end autonomous system—combining computer vision, confidence-based decision logic, and closed-loop motor control—while working within strict compute and latency constraints.

The biggest challenges were achieving reliable detection under changing lighting conditions, eliminating false positives, and maintaining stable motion without oscillation. Overcoming these constraints reinforced the importance of system-level thinking and hardware-aware AI, ultimately showing that production-grade autonomy can be both accessible and affordable.
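To illustrate the kind of confidence-based decision logic and oscillation damping described above, here is a minimal, hypothetical sketch: a target only counts as "locked" after several consecutive high-confidence detections (suppressing false positives), and the steering command is exponentially smoothed to keep motion stable. All names, thresholds, and gains are illustrative assumptions, not IRIS's actual code.

```python
from dataclasses import dataclass

# Illustrative constants -- not the project's real tuning values.
CONF_THRESHOLD = 0.6   # minimum detection confidence to count a frame
LOCK_FRAMES = 3        # consecutive confident frames needed to lock on
SMOOTHING = 0.7        # exponential smoothing factor for steering (0..1)
KP = 0.5               # proportional gain on horizontal target offset

@dataclass
class TargetTracker:
    streak: int = 0
    locked: bool = False
    steering: float = 0.0  # last smoothed steering command, in [-1, 1]

    def update(self, confidence: float, offset_x: float) -> float:
        """offset_x: target's horizontal offset from image center, in [-1, 1].
        Returns a smoothed steering command; 0.0 while not locked."""
        if confidence >= CONF_THRESHOLD:
            self.streak += 1
        else:
            # One low-confidence frame resets the streak and drops the lock,
            # so a spurious detection never drives the motors.
            self.streak = 0
            self.locked = False
        if self.streak >= LOCK_FRAMES:
            self.locked = True
        if not self.locked:
            return 0.0
        raw = max(-1.0, min(1.0, KP * offset_x))
        # Exponential smoothing damps frame-to-frame jitter (oscillation).
        self.steering = SMOOTHING * self.steering + (1 - SMOOTHING) * raw
        return self.steering

tracker = TargetTracker()
print(tracker.update(0.9, 0.4))   # first confident frame: no lock yet -> 0.0
print(tracker.update(0.8, 0.4))   # second frame: still below LOCK_FRAMES -> 0.0
print(round(tracker.update(0.85, 0.4), 3))  # third frame: locked, smoothed command
```

In a real pipeline the confidence and offset would come from the per-frame detector output, and the returned command would feed the motor controller each control cycle.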
