Inspiration

Landing is one of the most difficult skills for student pilots to learn. During training, pilots rely heavily on instructor feedback, but there is rarely a way to objectively review how well a landing approach was flown after the fact. Most general aviation pilots also do not have access to advanced flight analysis tools used in professional aviation.

We wanted to build a tool that automatically analyzes landing approach videos and gives clear feedback on runway alignment. We realized that with computer vision we could detect the runway in every frame of a landing video and measure how far the aircraft drifts from the centerline. This inspired us to build GlidePath, a system that turns ordinary cockpit footage into a visual landing performance analysis.

What it does

GlidePath analyzes aircraft landing approach videos and evaluates how well the aircraft tracks the runway centerline.

A user uploads a cockpit or approach video, and the system:

Detects the runway in each frame using a YOLO computer vision model

Computes the runway centerline and compares it to the center of the camera frame, which stands in for the aircraft's position

Measures lateral drift from the centerline throughout the approach

Generates an annotated output video showing alignment guidance

Produces charts and scores that evaluate alignment and approach stability

Fetches live METAR weather data to show wind conditions during the approach

The result is a visual and data-driven report that helps pilots understand how well they maintained runway alignment during landing.
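
To make the drift measurement and charting steps above concrete, each analyzed frame can be boiled down to a small record like the sketch below. The field names are ours, chosen for illustration, and not necessarily what GlidePath stores internally.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class FrameAlignment:
    """Illustrative per-frame result; field names are hypothetical."""
    frame_index: int
    runway_detected: bool
    # Signed offset between the runway centerline and the frame center,
    # as a fraction of frame width: negative = aircraft left of centerline,
    # positive = right, None when no runway was detected in this frame.
    offset_norm: Optional[float]
```

A drift chart is then just these offsets plotted over time, and the scores can be computed as summary statistics over them.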

How we built it

GlidePath is a full-stack computer vision system built with the following components:

Frontend

React dashboard for uploading videos and visualizing results

Recharts for alignment graphs and stability heatmaps

Custom UI components for approach summaries and weather information

Hosted on Vercel

Backend

FastAPI server that handles video uploads and analysis (a minimal endpoint sketch follows this list)

OpenCV for frame-by-frame video processing

Ultralytics YOLO model for runway detection

Geometry calculations to compute runway centerlines and lateral offset

Python services for scoring alignment and stability (see the scoring sketch below)

METAR weather data fetched from the NOAA Aviation Weather Center API (see the fetch sketch below)

Hosted on Render
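
The three sketches below illustrate the upload endpoint, the scoring service, and the METAR fetch from the list above. First, a minimal version of the upload endpoint; the route name, response shape, and temp-file handling are assumptions for illustration rather than GlidePath's exact code.

```python
# Minimal upload endpoint sketch; route name and response shape are hypothetical.
import shutil
import tempfile
from fastapi import FastAPI, File, UploadFile

app = FastAPI()

@app.post("/analyze")
async def analyze(video: UploadFile = File(...)):
    # Save the uploaded video to a temporary file so OpenCV can open it by path.
    with tempfile.NamedTemporaryFile(suffix=".mp4", delete=False) as tmp:
        shutil.copyfileobj(video.file, tmp)
        video_path = tmp.name
    # The frame-by-frame pipeline (sketched in the next section) would run here
    # and return per-frame offsets, scores, and a link to the annotated video.
    return {"status": "accepted", "video_path": video_path}
```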
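
The scoring services reduce the per-frame offsets to a couple of numbers. A minimal sketch, with made-up weightings; GlidePath's actual formulas may differ.

```python
# Toy scoring over normalized per-frame centerline offsets. The 4x and 8x
# weightings are invented for illustration, not the production formula.
import statistics

def alignment_scores(offsets: list[float]) -> dict[str, float]:
    mean_abs_drift = statistics.fmean(abs(o) for o in offsets)  # average distance from centerline
    jitter = statistics.pstdev(offsets)                         # how much the drift wanders
    return {
        "alignment_score": max(0.0, 100.0 * (1.0 - 4.0 * mean_abs_drift)),
        "stability_score": max(0.0, 100.0 * (1.0 - 8.0 * jitter)),
    }
```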
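
The weather panel only needs one HTTP call per approach. A sketch using the requests library; the aviationweather.gov endpoint and query parameters are written from memory of the AWC data API and may need adjusting.

```python
# Sketch of the METAR lookup; endpoint and parameters are our best understanding
# of the NOAA Aviation Weather Center data API.
import requests

def fetch_metar(station_id: str) -> dict:
    resp = requests.get(
        "https://aviationweather.gov/api/data/metar",
        params={"ids": station_id, "format": "json"},
        timeout=10,
    )
    resp.raise_for_status()
    reports = resp.json()
    return reports[0] if reports else {}
```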

The backend processes videos frame-by-frame, detects the runway, calculates the offset between the runway center and the image center, and generates an annotated video showing alignment guidance.
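
In simplified form, that loop looks roughly like the sketch below. The weights path, the best-box heuristic, and the drawing details are illustrative rather than our exact production code, and the real pipeline has more error handling.

```python
# Simplified sketch of the frame-by-frame analysis: detect the runway with YOLO,
# measure the offset between the runway box center and the image center, and
# write an annotated copy of the video.
import cv2
from ultralytics import YOLO

def annotate_approach(video_path: str, weights: str, out_path: str) -> list[float]:
    model = YOLO(weights)                      # custom runway-detection weights
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 30.0
    w = int(cap.get(cv2.CAP_PROP_FRAME_WIDTH))
    h = int(cap.get(cv2.CAP_PROP_FRAME_HEIGHT))
    writer = cv2.VideoWriter(out_path, cv2.VideoWriter_fourcc(*"mp4v"), fps, (w, h))

    offsets: list[float] = []                  # signed drift per detected frame
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        boxes = model(frame, verbose=False)[0].boxes
        if len(boxes) > 0:
            best = int(boxes.conf.argmax())    # keep the most confident detection
            x1, y1, x2, y2 = boxes.xyxy[best].tolist()
            runway_cx = (x1 + x2) / 2.0
            offsets.append((runway_cx - w / 2.0) / w)   # fraction of frame width
            # Draw the runway box, its centerline, and the frame center line.
            cv2.rectangle(frame, (int(x1), int(y1)), (int(x2), int(y2)), (0, 255, 0), 2)
            cv2.line(frame, (int(runway_cx), 0), (int(runway_cx), h), (0, 255, 0), 2)
            cv2.line(frame, (w // 2, 0), (w // 2, h), (255, 255, 255), 1)
        writer.write(frame)

    cap.release()
    writer.release()
    return offsets
```

The returned offsets are what feed the alignment charts and the scoring described above.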

Challenges we ran into

One of the biggest challenges was reliable runway detection. Runway videos vary widely depending on lighting, camera angle, aircraft type, and visual obstructions such as propellers or cockpit frames. We had to experiment with several models and detection approaches before detection was consistent across that variety of footage.

Another challenge was designing the geometry pipeline. Once a runway bounding box was detected, we needed a reliable way to estimate the runway centerline and calculate lateral drift from the aircraft’s perspective.

Accomplishments that we're proud of

Training and integrating a runway detection model

Building a full video processing pipeline using OpenCV

What we learned

Training and deploying YOLO object detection models

Building video processing pipelines with OpenCV

What's next for GlidePath

Real-time runway alignment analysis during flight simulator approaches
