Formula 1 is a sport dictated by split-second decisions and massive data pipelines. Drivers process raw data at 200 mph, relying heavily on human race engineers on the pit wall to make strategy calls over the radio. But human reaction time creates latency, and milliseconds cost championships. We wanted to automate the McLaren pit wall—building an AI Copilot that instantly analyzes telemetry and feeds predictive, actionable micro-adjustments directly into the cockpit.

**What it does:** RunnerF Copilot acts as a fully automated Virtual Race Engineer, processing massive telemetry pipelines to deliver real-time strategy and actionable driver feedback. It is powered by a comprehensive suite of systems:

Live AI Comms: Ingests live lap data to compare against historical optimal lines, issuing real-time micro-adjustments directly to the driver, such as "carry 5 km/h more speed into Turn 4."

Automated Radio Generator: Translates raw numerical pace deltas into contextual driver statuses and triggers the matching audio prompts with minimal latency.

Dynamic Tire & Fuel Modeling: Calculates real-time tire degradation slopes and adjusts expected lap times based on the exact weight of fuel burned.
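The core of that model can be sketched as a simple linear correction. The function name, coefficients, and the ~0.03 s/kg fuel-weight figure below are illustrative assumptions, not our exact tuning:

```python
def expected_lap_time(base_lap_s, lap_number, deg_slope_s_per_lap,
                      fuel_kg_onboard, fuel_effect_s_per_kg=0.03):
    """Predict a lap time from a linear tire-degradation slope and fuel load.

    deg_slope_s_per_lap: seconds lost per lap to tire wear (fit from recent laps).
    fuel_effect_s_per_kg: seconds added per kg of fuel still onboard (assumed).
    """
    tire_penalty = deg_slope_s_per_lap * lap_number
    fuel_penalty = fuel_effect_s_per_kg * fuel_kg_onboard
    return base_lap_s + tire_penalty + fuel_penalty
```

As the stint progresses, the tire penalty grows while the fuel penalty shrinks, which is what lets the model separate genuine pace loss from fuel-burn gains.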

Live Interval Tracking: Monitors the gap to surrounding cars lap-by-lap, automatically flagging when the driver enters dirty air or the DRS window.
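A minimal sketch of that flagging logic follows. The 1.0 s DRS threshold matches the F1 detection rule; the ~2.0 s dirty-air band is an assumption we use for illustration:

```python
def classify_gap(gap_ahead_s):
    """Flag proximity states from the live interval to the car ahead.

    Under 1.0 s the driver is inside the DRS detection window; under
    ~2.0 s (an assumed cutoff) we treat the car as running in dirty air.
    """
    if gap_ahead_s < 1.0:
        return "DRS_WINDOW"
    if gap_ahead_s < 2.0:
        return "DIRTY_AIR"
    return "CLEAR_AIR"
```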

Predictive Pit Strategy (PitAdvisor): Uses a Q-learning reinforcement model to predict the success rate of an undercut or overcut up to five laps in advance.
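The shape of that model is standard tabular Q-learning. The sketch below is a generic epsilon-greedy agent with illustrative state and action names, not our trained PitAdvisor:

```python
import random
from collections import defaultdict

class PitAdvisorSketch:
    """Minimal tabular Q-learning agent (states/actions are illustrative)."""
    ACTIONS = ("STAY_OUT", "UNDERCUT", "OVERCUT")

    def __init__(self, alpha=0.1, gamma=0.9, epsilon=0.1):
        self.q = defaultdict(float)            # (state, action) -> value
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def choose(self, state):
        if random.random() < self.epsilon:     # explore
            return random.choice(self.ACTIONS)
        return max(self.ACTIONS, key=lambda a: self.q[(state, a)])

    def update(self, state, action, reward, next_state):
        # Standard Q-learning update: move toward reward + discounted best next value.
        best_next = max(self.q[(next_state, a)] for a in self.ACTIONS)
        td_target = reward + self.gamma * best_next
        self.q[(state, action)] += self.alpha * (td_target - self.q[(state, action)])
```

Rewards would come from simulated stint outcomes (e.g. positions gained after the pit cycle), which is what lets the agent score an undercut several laps before it is attempted.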

Track Evolution Analysis: Analyzes specific sector times throughout the session to adjust pace expectations dynamically as rubber gets laid down on the asphalt.

Explainable AI (XAI) Dashboard: Displays the underlying decision tree paths and confidence scores so the team can verify and trust every AI strategy call.

The Post-Race Debrief: Generates a bespoke comparative summary after the checkered flag, pinpointing exact braking or throttle inefficiencies to show drivers exactly where they lost or gained time.
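As a rough sketch of how per-corner deltas become plain-English notes (corner names, thresholds, and wording here are illustrative, not our production templates):

```python
def debrief_lines(corner_deltas):
    """Turn per-corner time deltas (driver minus reference, in seconds)
    into plain-English debrief notes, biggest losses and gains first."""
    notes = []
    for corner, delta in sorted(corner_deltas.items(), key=lambda kv: -abs(kv[1])):
        if delta > 0.05:
            notes.append(f"{corner}: lost {delta:.2f}s; check braking point and apex speed.")
        elif delta < -0.05:
            notes.append(f"{corner}: gained {abs(delta):.2f}s over the reference lap.")
    return notes
```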

**How we built it:** We engineered the backend in Python to handle rapid data ingestion and analysis. To generate the live comms, we process arrays of spatial and velocity data: the system constantly compares the driver's current speed and track position against a historical optimal racing line. It monitors two signals, the variance in velocity (is the driver too fast or too slow?) and the positional deviation from the apex (did they miss the ideal racing line?), and checks each against tuned thresholds. When a driver strays too far from the optimal path or speed, the system immediately fires the appropriate steering or throttle call. We paired this analytical backend with a custom OpenCV heads-up display to visualize the telemetry and AI comms in real time.
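A minimal per-sample version of that threshold check might look like the following; the function name and the 5 km/h / 1.5 m tolerances are illustrative placeholders, not our tuned values:

```python
import math

def comm_for_sample(speed_kmh, pos_xy, ref_speed_kmh, ref_pos_xy,
                    speed_tol_kmh=5.0, line_tol_m=1.5):
    """Compare one telemetry sample against the reference racing line and
    return a radio message, or None if the driver is within tolerance."""
    speed_delta = speed_kmh - ref_speed_kmh
    line_error = math.hypot(pos_xy[0] - ref_pos_xy[0],
                            pos_xy[1] - ref_pos_xy[1])

    if line_error > line_tol_m:
        return f"Off line by {line_error:.1f} m, tighten toward the apex."
    if speed_delta < -speed_tol_kmh:
        return f"Carry {abs(speed_delta):.0f} km/h more speed here."
    if speed_delta > speed_tol_kmh:
        return f"Too hot by {speed_delta:.0f} km/h, brake earlier."
    return None
```

Returning `None` inside tolerance is what keeps the radio quiet until a micro-adjustment actually matters.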

**Challenges we ran into:** Processing rapid streams of simulated telemetry data without creating a bottleneck was our biggest hurdle. We initially struggled with latency; if an AI comm prompt tells a driver to brake, but the system lags by even half a second, the instruction is useless. We had to heavily optimize our Python loops and data comparison functions to ensure the calculations fired instantly. Additionally, rendering the live data on our OpenCV dashboard required careful management of frame updates to prevent the UI from freezing.
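One general pattern for that frame-update problem is to decouple the telemetry producer from the renderer with a bounded queue and only ever draw the freshest sample. This is a simplified single-threaded sketch of the idea, not our exact OpenCV loop:

```python
import queue

def push_samples(out_q, samples):
    """Producer sketch: enqueue each telemetry sample, dropping the oldest
    queued frame when the buffer is full so the renderer never falls behind."""
    for s in samples:
        try:
            out_q.put_nowait(s)
        except queue.Full:
            try:
                out_q.get_nowait()   # discard the stalest frame
            except queue.Empty:
                pass
            out_q.put_nowait(s)

def render_latest(in_q, render_fn):
    """Consumer sketch: drain the queue and render only the newest sample."""
    latest = None
    while True:
        try:
            latest = in_q.get_nowait()
        except queue.Empty:
            break
    if latest is not None:
        render_fn(latest)
    return latest
```

Dropping stale frames trades completeness for freshness, which is the right trade for a HUD where only the current state matters.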

**Accomplishments that we're proud of:** We successfully closed the loop between raw data and actionable driver feedback. We are incredibly proud of the Post-Race Debrief engine—taking massive spreadsheets of lap times and translating them into plain-English, strategic summaries that a real F1 driver could actually use to improve their next session.

**What we learned:** We learned that in high-speed data environments, the delivery of the data is just as important as the calculation. Creating the logic for the live comms taught us how to filter out "noise" so we only alert the driver when a micro-adjustment actually matters. We also deepened our understanding of applying continuous kinematic state updates to virtual objects.

**What's next for Runner Copilot:** Our next major step is integrating the OpenF1 API to ingest historical, real-world telemetry data from past Grand Prix events. We also want to connect the Copilot to actual sim-racing hardware (like an Assetto Corsa or F1 24 rig) via UDP telemetry, allowing real sim-racers to use our AI engineer to lower their lap times live on the track.
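The ingest side of that integration is a plain UDP listener. Port 20777 is the default the F1-series games broadcast on (an assumption for other sims), and decoding real packets would require `struct.unpack` against the game's published binary layout, which this sketch deliberately does not attempt:

```python
import socket

def open_telemetry_socket(host="127.0.0.1", port=20777):
    """Bind a UDP socket for incoming sim-racing telemetry datagrams.
    Port 20777 is the conventional F1-game default (assumed here)."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((host, port))
    return sock

def read_packet(sock, bufsize=2048):
    """Block until one raw telemetry datagram arrives and return its bytes.
    Decoding requires the game's packet spec (not shown)."""
    data, _addr = sock.recvfrom(bufsize)
    return data
```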
