Peace Drone
Inspiration
Autonomous flight starts with one deceptively hard problem: hovering. Before a drone can inspect infrastructure, assist in emergencies, or navigate on its own, it has to stay stable in place.
We were inspired by the AeroHacks challenge to build that foundation ourselves using real hardware, real cameras, and real control constraints. What made the project exciting was that it was not a simulation problem: every small error in vision, localization, or communication showed up immediately in flight behavior. That made Peace Drone feel like a real robotics system from day one.
We also liked the bigger idea behind it: if a drone can reliably understand where it is and hold position safely, that becomes the first step toward more useful autonomous missions.
What it does
Peace Drone is a vision-based autonomous hover system for an ESP32 drone inside a flight cage.
Our system uses two USB cameras to observe the drone, detect its LED markers, estimate its 3D position, and send live correction commands over Wi-Fi. The Python stack handles stereo camera calibration, color-based LED tracking, 3D localization, PID-based hover control, and safety logic such as diagnostics, dry-run mode, and emergency stop.
In practice, we built and tested the full end-to-end pipeline and achieved partial live flight tests. While we did not reach a competition-perfect sustained hover, we were able to validate the core architecture needed for vision-guided stabilization.
How we built it
We built Peace Drone as a full pipeline, with each stage feeding the next:
- Stereo calibration: We first calibrated two fixed cameras using a checkerboard so we could convert image detections into 3D geometry.
- Vision detection: We used OpenCV and HSV thresholding to detect the drone's red, green, and blue LEDs in each camera feed (a minimal detection sketch follows this list).
- 3D localization: Once the LEDs were found in both views, we triangulated the detections to estimate the drone's position in space and approximate its velocity (see the triangulation sketch after this list).
- Outer-loop control: We used PID control to compare the current pose with the target hover position and compute pitch, roll, and thrust corrections.
- Drone communication: We sent those commands from Python to the drone over Wi-Fi while relying on the flashed firmware to handle the fast inner stabilization loop.
- Debugging and safety tooling: We built diagnostics, preview windows, logging, dry-run mode, and stop/disarm controls so we could debug each subsystem separately before attempting live flight.
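The LED detection step above reduces to a color threshold in HSV space followed by a centroid computation. Below is a minimal sketch of that idea; the exact HSV ranges, the morphology step, and the handling of the red hue wrap-around are illustrative assumptions that would be tuned to the actual LEDs and cage lighting.

```python
import cv2
import numpy as np

# Illustrative HSV ranges only; real thresholds are tuned per LED and lighting.
# Red hue wraps around 180, so a second range near the top of the hue scale
# may be needed in practice.
LED_RANGES = {
    "red":   (np.array([0, 120, 120]),  np.array([10, 255, 255])),
    "green": (np.array([45, 80, 120]),  np.array([85, 255, 255])),
    "blue":  (np.array([100, 80, 120]), np.array([130, 255, 255])),
}

def detect_leds(frame_bgr):
    """Return {color: (u, v)} pixel centers for each LED visible in one frame."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    centers = {}
    for color, (lo, hi) in LED_RANGES.items():
        mask = cv2.inRange(hsv, lo, hi)
        # Remove single-pixel noise before taking the centroid.
        mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((3, 3), np.uint8))
        m = cv2.moments(mask)
        if m["m00"] > 0:
            centers[color] = (m["m10"] / m["m00"], m["m01"] / m["m00"])
    return centers
```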
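Once the same LED is found in both views, triangulation turns the two pixel detections into a 3D point. The sketch below assumes the stereo calibration step produced a 3x4 projection matrix for each camera (intrinsics times [R|t]); the function and variable names are illustrative, not our exact code.

```python
import cv2
import numpy as np

def triangulate_led(P1, P2, uv1, uv2):
    """Estimate a 3D point from matching detections in the two calibrated cameras.

    P1, P2: 3x4 projection matrices from stereo calibration.
    uv1, uv2: (u, v) pixel coordinates of the same LED in each view.
    """
    pts1 = np.array(uv1, dtype=np.float64).reshape(2, 1)
    pts2 = np.array(uv2, dtype=np.float64).reshape(2, 1)
    point_h = cv2.triangulatePoints(P1, P2, pts1, pts2)  # homogeneous 4x1
    return (point_h[:3] / point_h[3]).ravel()            # (X, Y, Z)

# Velocity is then approximated by finite differences between frames:
#   v ≈ (p_now - p_prev) / dt
```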
At a high level, the outer control loop followed the standard PID law
$$ u(t) = K_p e(t) + K_i \int e(t)\,dt + K_d \frac{de(t)}{dt} $$
where the error term came from the difference between the target hover position and the drone's estimated 3D pose.
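In code, that outer loop reduces to a small discrete PID per axis. The sketch below is a minimal illustration rather than our exact implementation, and the class name and example gains are placeholders.

```python
class PID:
    """Minimal discrete PID controller implementing the formula above."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, error, dt):
        self.integral += error * dt
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# One controller per axis, e.g. thrust correction from altitude error
# (gains are illustrative, not tuned values):
#   thrust_pid = PID(kp=0.8, ki=0.1, kd=0.3)
#   thrust_correction = thrust_pid.update(target_z - estimated_z, dt)
```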
Challenges we ran into
Our biggest challenge was calibration. Since the whole system depends on accurate 3D localization, even small stereo calibration errors could turn into unstable pose estimates and bad control outputs.
The second major challenge was latency and noisy sensing. In a real setup, the cameras do not always see the LEDs cleanly, lighting changes affect detection quality, and Wi-Fi communication adds timing constraints. That meant we were constantly balancing responsiveness against stability.
We also learned that robotics failures compound quickly: a small vision glitch becomes a localization jump, which becomes an aggressive control correction, which can ruin a flight attempt. Debugging that chain in real time was much harder than getting each component to work individually.
Accomplishments that we're proud of
We are proud that we built a complete end-to-end autonomy stack rather than just a single computer vision demo.
Some of the pieces we are especially proud of are:
- a stereo vision pipeline for LED tracking and triangulation
- a PID-based hover controller connected to real drone commands
- dry-run and live modes for safer iteration
- diagnostics, overlays, and logs to make the system debuggable
- a working emergency-stop-oriented workflow for real hardware testing
Even though our live results were partial, we successfully turned a hard autonomy challenge into a structured, testable system.
What we learned
We learned that in robotics, system integration matters as much as the algorithm itself.
A few big takeaways stood out:
- good calibration matters more than clever control, because no controller can compensate for a wrong pose estimate
- observability is everything, and logs/debug views save enormous amounts of time
- real-world latency and noise dominate behavior much more than they do in theory
- safe iteration matters, especially when testing live hardware
- building the tools to inspect failures is often what makes progress possible
Most of all, we learned that stable autonomous flight is not one problem but the combination of sensing, geometry, communication, and control all working together under real constraints.
What's next for Peace Drone
Our next steps are focused on making the hover loop more robust and moving toward higher-level autonomy:
- improve stereo calibration and camera placement for more stable 3D estimates
- make LED detection more robust to lighting changes and partial occlusion
- add stronger filtering/state estimation before control decisions (a lightweight example is sketched after this list)
- continue tuning thrust baselines, axis mapping, and PID gains for steadier live hover
- move from hover stabilization toward waypoint following and mission-level behavior
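As a first step toward the stronger filtering mentioned above, one lightweight option is exponential smoothing of the triangulated position before it reaches the controller. This is only a sketch of the idea, not a committed design; a full Kalman-style estimator would eventually replace it.

```python
import numpy as np

class PoseFilter:
    """Exponential smoothing of the 3D position estimate before control."""

    def __init__(self, alpha=0.3):
        self.alpha = alpha       # lower alpha -> smoother but laggier estimate
        self.position = None

    def update(self, measured_xyz):
        measured_xyz = np.asarray(measured_xyz, dtype=float)
        if self.position is None:
            self.position = measured_xyz
        else:
            self.position = self.alpha * measured_xyz + (1 - self.alpha) * self.position
        return self.position
```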
Peace Drone started as a hover challenge, but we see it as the first step toward a safer and more capable autonomous drone platform.
