R.A.C.E.: Real-time Analysis and Control Engine

Inspiration

Race engineers in Formula 1 balance hundreds of live data streams - speed, tire temps, fuel levels - while also interpreting cryptic team radio messages. We wanted to create an AI co-pilot that listens to the same radio, learns each competitor's habits, and validates strategies.

The idea: when a driver says “full speed ahead,” the system should know exactly what that means in telemetry. The race engineer already has access to the car's data; we wanted to go one step further and provide strategic aid.

What It Does

F1 Race Engineer AI creates an AI agent for each driver that learns normal behavior over a few laps, then detects deviations and matches them with radio phrases to decrypt strategy calls. It also uses these observed behaviors to suggest both offensive and defensive strategies.

  • Streams live telemetry and radio transcripts through Kafka topics.
  • Uses Spark Streaming for real-time anomaly detection and strategy analysis.
  • Runs ElevenLabs Speech-to-Text + Emotion Embeddings to transcribe and analyze tone.
  • Summarizes every few laps via the Gemini API.

Everything is displayed in a React + Tailwind galaxy-themed dashboard hosted on DigitalOcean Kubernetes.
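The "learns normal behavior over a few laps, then detects deviations" step can be sketched in plain Python. This is not our production Spark job; it is a minimal illustration of the rolling-baseline idea, with the class name, window size, and z-score threshold all chosen for the example:

```python
from collections import deque
from statistics import mean, stdev

class DriverBaseline:
    """Learns a driver's 'normal' value for one telemetry channel
    (e.g. speed) over a rolling window, then flags sharp deviations."""

    def __init__(self, window: int = 20, z_threshold: float = 3.0):
        self.history = deque(maxlen=window)  # recent samples for this channel
        self.z_threshold = z_threshold

    def observe(self, value: float) -> bool:
        """Record a sample; return True if it deviates from the baseline."""
        anomalous = False
        if len(self.history) >= 5:  # wait until a minimal baseline exists
            mu, sigma = mean(self.history), stdev(self.history)
            if sigma > 0 and abs(value - mu) / sigma > self.z_threshold:
                anomalous = True
        self.history.append(value)
        return anomalous
```

In the real pipeline one instance of this per driver and per channel would sit inside the Spark Streaming job; a flagged sample becomes an anomaly event that gets matched against nearby radio phrases.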

How We Built It

Frontend: Next.js + Tailwind CSS, real-time celestial dashboard
Backend: FastAPI + Redis + Kafka (MSK)
Cloud: AWS (S3 / Glue / Athena / EMR / IAM), DigitalOcean (VPC / Spaces / Kubernetes / Load Balancer / Firewall)
Infrastructure: Terraform, AWS
AI: Gemini + ElevenLabs

Challenges We Ran Into

  • Synchronizing audio timestamps with observed telemetry.
  • Managing two clouds (AWS + DigitalOcean) from a single Terraform deployment.
  • Ensuring consistent emotion embeddings amid noisy radio feeds.
  • Implementing big data analytics on a live, high-volume telemetry stream.

Accomplishments We’re Proud Of

  • Built a full race simulator streaming telemetry + radio data.
  • Designed a real-time dashboard visualizing driver anomalies and “codeword” triggers.
  • Implemented cross-cloud Spark pipelines from Kafka → EMR → S3 → Gemini.
  • Getting hands-on with big data analytics, a buzzword we had been curious about.
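The “codeword” trigger detection above pairs radio phrases with anomalies that follow shortly after them. A minimal sketch of that co-occurrence step, with the data shapes (timestamped phrases and anomaly events) and the ten-second window being assumptions for illustration:

```python
def match_codewords(phrases, anomalies, window_s=10.0):
    """Pair each radio phrase (timestamp, text) with anomaly events
    (timestamp, channel) that begin within `window_s` seconds after it,
    building a phrase -> affected-channels co-occurrence map."""
    matches = {}
    for p_ts, text in phrases:
        hits = [ch for a_ts, ch in anomalies if 0 <= a_ts - p_ts <= window_s]
        if hits:
            matches.setdefault(text, []).extend(hits)
    return matches
```

Over several laps the map accumulates evidence, e.g. that a given phrase is reliably followed by a throttle anomaly, which is what lets the agent start “decrypting” a competitor's strategy calls.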

What We Learned

  • Clean timestamp design is the backbone of streaming ML.
  • LLMs (Gemini) can explain raw numerics in human language.
  • ElevenLabs embeddings open new ways to interpret how drivers speak, not just what they say.

🔮 What’s Next

  • Deploy LSTM autoencoders for more adaptive anomaly detection
  • Integrate Formula Student telemetry APIs to validate with real race data
  • Scale to 37 containerized agents (18 driver agents, 18 engineer agents, and 1 coordinator), showcasing NMC²’s distributed compute expertise for teams like Williams
