Hover Drone Stabilization with Dual-Camera Vision

Inspiration

Keeping a drone stable indoors is harder than it looks. Without GPS and with limited onboard sensing, small disturbances quickly push it off course.

For this hackathon, the challenge was to keep a drone hovering as long as possible inside a 1 m × 1 m cage. To do that, we built a computer vision system using two external cameras to track the drone in 3D space.

Our goal was to build a feedback loop that keeps the drone centered and stable in midair.


What It Does

Our system tracks the drone’s position in real time and estimates its 3D coordinates inside the cage.

We detect the red and green LEDs on the drone, filtering out other light sources that do not match the expected color and brightness. With two cameras placed roughly 90° apart, we triangulate the drone's position:

  • Camera 1 tracks X and Z
  • Camera 2 tracks Y and Z

From these detections we compute a normalized coordinate (x, y, z) within the 1-meter cube. This position can feed into a control loop that calls movement functions through the drone’s WebSocket API, nudging it back toward the center when it drifts.

The target hover point is (0.5, 0.5, 0.5), the center of the cage.


How We Built It

Dual-Camera Vision System

We use OpenCV to process frames from two cameras simultaneously.

Each camera is calibrated by selecting the bounding box of the cage, ensuring all measurements use the same physical reference frame.

Every frame then goes through this pipeline:

  1. Crop to cage region
  2. Detect LED colors
  3. Identify drone center
  4. Combine camera detections
  5. Estimate 3D position
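The steps above can be sketched roughly as follows. This is an illustrative outline, not our actual code: the function names are placeholders, the frame is assumed to be in OpenCV's BGR channel order, and the crude channel thresholds stand in for the real HSV detection described below.

```python
import numpy as np

def crop_to_cage(frame, roi):
    """Step 1: keep only the calibrated cage region (roi = x, y, w, h)."""
    x, y, w, h = roi
    return frame[y:y + h, x:x + w]

def detect_leds(crop):
    """Step 2 (stub): find bright red and green pixels in a BGR crop.
    The real system uses HSV thresholding; simple channel tests stand in here."""
    red = np.argwhere((crop[..., 2] > 200) & (crop[..., 1] < 100))
    green = np.argwhere((crop[..., 1] > 200) & (crop[..., 2] < 100))
    return red, green

def drone_center(red, green):
    """Steps 3-4: midpoint of both LEDs, or fall back to the one that is
    visible. Returns (row, col) pixel coordinates, or None if nothing found."""
    pts = [p.mean(axis=0) for p in (red, green) if len(p)]
    if not pts:
        return None
    return tuple(np.mean(pts, axis=0))
```

Step 5, turning these per-camera pixel positions into a 3D estimate, is covered in the position-estimation section.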

LED-Based Drone Detection

Instead of detecting the drone body, which is difficult with motion blur, we track the LEDs mounted on the drone.

The system:

  • Uses HSV color thresholding to detect red and green LEDs
  • Cleans masks with morphological filtering
  • Finds contours matching expected LED sizes
  • Computes the center of detected LEDs

If both LEDs are detected, we use their midpoint; if only one is visible, we fall back to that position.


3D Position Estimation

We convert the detected positions into normalized cage coordinates:

  • X from Camera 1 horizontal position
  • Y from Camera 2 horizontal position
  • Z from the vertical position seen by both cameras

We average the vertical estimate from both cameras to reduce noise.

The result is a real-time estimate of the drone’s position within the cage.
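The mapping itself is simple once the cage bounds are calibrated. A sketch, with the cage bounds expressed as (left, right, top, bottom) pixel coordinates in each camera image (the function names are ours, not the actual implementation):

```python
def normalize(px, lo, hi):
    """Map a pixel coordinate into [0, 1] within the calibrated cage bounds."""
    return min(1.0, max(0.0, (px - lo) / (hi - lo)))

def estimate_position(cam1, cam2, cage1, cage2):
    """Combine the two camera detections into a normalized (x, y, z).

    cam1/cam2 are (px, py) pixel detections; cage1/cage2 are the calibrated
    cage bounds (left, right, top, bottom) in each camera image.
    """
    x = normalize(cam1[0], cage1[0], cage1[1])           # X from camera 1
    y = normalize(cam2[0], cage2[0], cage2[1])           # Y from camera 2
    z1 = 1.0 - normalize(cam1[1], cage1[2], cage1[3])    # image y grows downward
    z2 = 1.0 - normalize(cam2[1], cage2[2], cage2[3])
    return x, y, (z1 + z2) / 2.0                         # average Z to cut noise
```

The `1.0 -` flip matters: image rows increase downward, while cage height should increase upward.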


Debug Visualization

To speed up development we built several visualization tools:

  • Cage calibration bounding boxes
  • LED detection overlays
  • ROI debug views
  • A top-down cage visualization showing the drone’s estimated position and height

These tools helped tune detection thresholds and verify coordinate mapping.
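The top-down view is the simplest of these to reproduce. A small sketch (our own simplification, not the actual tool): render the normalized (x, y) as a marker on a square image and encode height z as brightness.

```python
import numpy as np

def top_down_view(pos, size=200):
    """Render a normalized (x, y, z) as a grayscale top-down debug image:
    a square marker at (x, y) whose brightness encodes the height z."""
    img = np.zeros((size, size), np.uint8)
    x, y, z = pos
    px, py = int(x * (size - 1)), int(y * (size - 1))
    r = 3  # marker half-width in pixels
    img[max(0, py - r):py + r + 1, max(0, px - r):px + r + 1] = int(50 + 205 * z)
    return img
```

Displaying this next to the raw camera feeds makes threshold mistakes obvious at a glance: the marker jumps or vanishes the instant detection goes wrong.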


Integrating Drone Control

The vision system continuously outputs the drone’s (x, y, z) coordinates.

A control script can then:

  1. Compare the drone’s position to the target center
  2. Compute the error along each axis
  3. Send stabilization commands through the WebSocket API

Because the hackathon required using predefined control functions, the control layer simply calls these functions based on positional error.

A PID controller could be added later to smooth corrections and improve stability.
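The proportional core of such a loop is small. A hedged sketch, where the returned (axis, step) pairs would be translated into calls to the hackathon's predefined control functions (which we do not reproduce here); the deadband and gain values are illustrative:

```python
TARGET = (0.5, 0.5, 0.5)   # cage center in normalized coordinates
DEADBAND = 0.05            # ignore tiny errors so the drone doesn't oscillate
GAIN = 1.0                 # a later PID layer would replace this single gain

def correction(position, target=TARGET):
    """Compare position to the target and return per-axis correction steps.

    Each entry is (axis, signed_step); an empty list means 'hold position'.
    """
    commands = []
    for axis, p, t in zip("xyz", position, target):
        err = t - p                       # step 2: error along this axis
        if abs(err) > DEADBAND:
            commands.append((axis, GAIN * err))
    return commands
```

With only a proportional term and a deadband, the drone tends to settle into a slow orbit around the center; that residual motion is exactly what the later PID stage would damp out.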


Challenges We Ran Into

Learning new tools quickly

This was our first time using several of the libraries involved, especially OpenCV for real-time camera processing. We got up to speed by asking other teams for help, reading documentation, and using AI assistants to prototype and understand unfamiliar APIs.

Incorrect propeller configuration

During early tests the drone tilted instead of lifting. With help from the organizers we realized the propellers were mounted incorrectly and fixed the clockwise vs counter-clockwise configuration.

Gyroscope safety mode

If the drone tilted too much, its gyroscope entered an error state and disabled normal operation. Through repeated testing we noticed patterns in the drone’s status LEDs and learned the drone needed to be fully restarted when this happened.

Startup gyroscope alignment

The drone’s gyroscope calibration was not always perfectly aligned at startup, causing drifting or spinning during takeoff. Teams at the hackathon shared ideas to improve the startup process.

Reliable LED detection

Tracking LEDs was challenging because they sometimes disappeared when the drone tilted, and other bright objects triggered false detections.

We improved reliability by:

  • Testing directly with the drone
  • Adding clearer debug visualizations
  • Iterating on detection thresholds
  • Falling back to a single LED when necessary
  • Averaging vertical estimates from both cameras

Camera compatibility issues

Our camera visualization worked on macOS but not on Windows due to differences in camera drivers. We solved this by connecting both cameras to a Mac laptop using a dongle.

Camera perspective alignment

Because the cameras sit at different angles, converting pixel positions into consistent 3D coordinates required careful normalization relative to the cage bounds. Manual ROI selection aligned both camera views.

Last-minute import error

Right before a final test we encountered a Python import issue that prevented the full system from running. Although we could not resolve it in time, most components had already been validated independently.


Accomplishments We're Proud Of

  • Building a working dual-camera tracking system that estimates a drone’s 3D position in real time
  • Designing a robust LED detection pipeline that worked under changing lighting conditions
  • Creating visual debugging tools that helped us tune detection and coordinate mapping quickly
  • Integrating the vision system with the drone’s WebSocket control interface

What We Learned

At first we thought stabilizing a drone would mostly involve connecting reliable libraries and applying known techniques. In practice we learned that real systems are much messier than textbook problems.

Small factors made a big difference. Motors behaved inconsistently, LEDs had different brightness levels, cameras were not perfectly aligned, and the LEDs were not always visible depending on the drone’s angle. All of these variables made the system harder to build and debug.

We also gained valuable experience with computer vision and real-time video processing, especially using OpenCV to detect and track objects. By the end of the hackathon we had a much better understanding of what it takes to build a system that connects vision, control, and a physical drone.


What’s Next

If we continued developing this project, we would add:

  • Automatic camera calibration
  • A full PID stabilization loop
  • Kalman filtering for smoother tracking
  • Multi-drone support
  • Higher frame rate cameras

This approach could eventually become a low-cost indoor drone positioning system for robotics research, drone racing, or autonomous flight experiments.
