Inspiration
The physical world is still mostly captured by humans walking around with phones, drones, or expensive LiDAR rigs. Disaster zones, collapsed mines, and planetary surfaces are places humans can't or shouldn't go, and they remain almost entirely undocumented in 3D. What if a small autonomous rover could do this on demand, just by tapping a location on a map?
What it does
A fully autonomous ground rover that navigates to any GPS coordinate the operator pins on a live map. It knows its heading via a 9-axis IMU, avoids obstacles with a two-layer safety system (ultrasonic + computer vision), streams live telemetry to a web dashboard, and captures geo-tagged frames ready for Gaussian Splatting 3D reconstruction.
Why this matters
This is a platform for autonomous mapping of planetary surfaces, sending rovers into collapsed buildings before humans go in, documenting hazardous sites, search and rescue across large terrain, construction progress tracking, archaeology, and reconnaissance in environments where humans can't safely operate.
How we built it
The brain is a Rubik Pi 3 (Qualcomm QCS6490) running Ubuntu 24.04. GPS comes from a GY-NEO6MV2 over UART, and heading from an MPU-9250 magnetometer over I2C; without the magnetometer, GPS tells the rover where it is but not which way it's facing. The rover uses standard Haversine and bearing math to compute the distance and direction to its target (a sketch of that math follows below).
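Here's a minimal sketch of that navigation math in Python. The function and constant names are illustrative, not the exact code running on the rover: it computes the great-circle distance and initial bearing to the pinned coordinate, then a signed heading error against the IMU reading.

```python
import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius in meters

def distance_and_bearing(lat1, lon1, lat2, lon2):
    """Haversine distance (m) and initial bearing (degrees) from point 1 to point 2."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlam = math.radians(lon2 - lon1)

    # Haversine formula for great-circle distance
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlam / 2) ** 2
    distance = 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

    # Initial bearing, normalized to 0-360 degrees (0 = north, 90 = east)
    y = math.sin(dlam) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlam)
    bearing = (math.degrees(math.atan2(y, x)) + 360) % 360

    return distance, bearing

def heading_error(target_bearing, imu_heading):
    """Signed error in degrees, wrapped to [-180, 180); positive means steer right."""
    return (target_bearing - imu_heading + 180) % 360 - 180
```

The wrapped heading error is what feeds steering: the rover turns toward zero error, then drives while the distance shrinks.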
Safety runs on a strict two-tier system: an HC-SR04 ultrasonic sensor on a panning servo acts as a hardware emergency brake, and OpenCV edge detection on the USB webcam catches wider obstacles. Either layer can stop the rover; both must be clear before motion (a sketch of that gating logic is below).

Motors run through an L298N H-bridge controlled via sysfs GPIO (also sketched below). Power comes from LiPo batteries: straight to the motors, and through a USB-C PD trigger module to the Pi. A live web dashboard handles map pinning and telemetry. The Gaussian Splatting reconstruction was offloaded to RCAC (Rosen Center for Advanced Computing) for the heavy GPU compute, since the rover's onboard Pi can't handle that workload.
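A rough sketch of the two-layer safety gate. The names and the edge-density threshold here are illustrative placeholders, not our tuned values:

```python
import cv2
import numpy as np

ULTRASONIC_STOP_CM = 30  # illustrative hard-brake distance for the HC-SR04 layer

def vision_path_clear(frame, edge_fraction_limit=0.12):
    """Vision layer: treat the frame as blocked when the lower half is dense with Canny edges.
    The 0.12 threshold is an example value, not the tuned one."""
    lower_half = frame[frame.shape[0] // 2:, :]            # only look at the ground ahead
    gray = cv2.cvtColor(lower_half, cv2.COLOR_BGR2GRAY)
    edges = cv2.Canny(cv2.GaussianBlur(gray, (5, 5), 0), 50, 150)
    return np.count_nonzero(edges) / edges.size < edge_fraction_limit

def motion_allowed(ultrasonic_cm, frame):
    """Both layers must report clear before the motors may run; either alone can veto motion."""
    return ultrasonic_cm > ULTRASONIC_STOP_CM and vision_path_clear(frame)
```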
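And a minimal sysfs GPIO sketch for driving one L298N channel, since RPi.GPIO doesn't exist on this board. The pin numbers are placeholders; the real ones depend on the QCS6490 GPIO controller's base offset:

```python
GPIO_ROOT = "/sys/class/gpio"

def gpio_setup(pin):
    """Export a pin via the kernel sysfs interface and configure it as an output."""
    try:
        with open(f"{GPIO_ROOT}/export", "w") as f:
            f.write(str(pin))
    except OSError:
        pass  # already exported
    with open(f"{GPIO_ROOT}/gpio{pin}/direction", "w") as f:
        f.write("out")

def gpio_write(pin, value):
    with open(f"{GPIO_ROOT}/gpio{pin}/value", "w") as f:
        f.write("1" if value else "0")

IN1, IN2 = 433, 434  # hypothetical pin numbers for one L298N channel

def drive_forward():
    gpio_write(IN1, 1)  # IN1 high, IN2 low spins the motor one way
    gpio_write(IN2, 0)

def stop():
    gpio_write(IN1, 0)
    gpio_write(IN2, 0)
```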
Challenges we ran into
We faced a lot of difficulties. Gaussian Splatting integration didn't make it across the finish line: we got Gaussian Splatting working independently and proved we could reconstruct scenes from captured images, but bridging it to the rover's live capture pipeline was a much bigger systems problem than we estimated. GPS accuracy was a constant battle. The GY-NEO6MV2 is hobbyist-grade with ~2.5 m accuracy at best, and much worse near buildings or with a weak satellite fix, so the rover sometimes thinks it's two meters from where it actually is and overshoots. Real fixes would mean RTK GPS or visual-inertial odometry, both beyond our timeline. The MPU-9250 got scrambled by the L298N's magnetic field; we had to relocate the IMU, and even then the readings drift. The Rubik Pi is also not a Raspberry Pi: RPi.GPIO does nothing, which cost us hours before we switched to sysfs GPIO. Countless hours spent writing code and debugging...
Accomplishments we're proud of
A rover that genuinely drives itself. A two-layer safety system that refuses to drive into walls even when GPS says to. A working Gaussian Splatting pipeline. A clean operator dashboard. All built on a brand-new platform (Rubik Pi 3) with almost zero community documentation; we figured most of it out from scratch.
What we learned
Sensor fusion is the whole game in robotics; no single sensor is reliable enough on its own. Hardware always takes longer than you think... And we ran out of time.
What's next
Closing the loop between live capture and Gaussian Splatting reconstruction. RTK GPS for sub-meter accuracy. Indoor SLAM for environments without GPS. Multi-waypoint missions and real obstacle avoidance instead of just stopping. Fleet coordination for parallel mapping. And a hardened build: weatherproofing, better suspension, and longer battery life, so the rover can actually go to the dangerous places we built it for.