CAR|TEL is a real‑time AI co‑driver for cars, built to bring race‑team‑grade diagnostics and coaching to anyone with an OBD port and an internet connection. It started from my frustration as a tuner and fabricator: powerful tools exist, but they are scattered across expensive hardware, legacy desktop software, and closed ecosystems that normal drivers and small workshops cannot easily use.

As a high‑school dropout who learned software and AI the hard way, I wanted to prove that a single founder with trade skills in carpentry and aluminium joinery could ship a production‑ready live agent that genuinely helps drivers go faster and keep their cars healthier. I also wanted something I would actually use at the track: an assistant that can watch telemetry, spot problems before they become failures, and explain what is happening in plain language, not just error codes.

What I built

At a high level, CAR|TEL connects three domains: on‑car telemetry, cloud AI, and human‑friendly coaching.

A data pipeline that ingests live OBD‑II / CAN signals (RPM, speed, throttle, temperatures, trims, boost, etc.) from a mobile client and normalizes them into time‑aligned events.
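
A minimal sketch of the time‑alignment step: raw PID samples are bucketed into fixed windows, keeping the latest value per channel per window. The names (`RawSample`, `alignSamples`) and the 100 ms window are illustrative assumptions, not the production pipeline.

```typescript
// Sketch: bucket raw PID samples into fixed time windows so that
// channels sampled at different rates line up into one event stream.
interface RawSample {
  pid: string;   // e.g. "rpm", "coolant_temp"
  value: number;
  ts: number;    // epoch milliseconds from the mobile client
}

type AlignedEvent = { ts: number; channels: Record<string, number> };

function alignSamples(samples: RawSample[], windowMs = 100): AlignedEvent[] {
  const buckets = new Map<number, Record<string, number>>();
  for (const s of samples) {
    const bucket = Math.floor(s.ts / windowMs) * windowMs;
    const channels = buckets.get(bucket) ?? {};
    channels[s.pid] = s.value; // later samples in a window win
    buckets.set(bucket, channels);
  }
  return Array.from(buckets.entries())
    .sort((a, b) => a[0] - b[0])
    .map(([ts, channels]) => ({ ts, channels }));
}
```

The real pipeline also has to tolerate network drops, so the window size is a latency/robustness trade‑off rather than a fixed constant.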

A backend that stores sessions, computes derived performance metrics (for example, simple power estimates from P ≈ τ·ω, or braking performance from deceleration profiles), and exposes them via APIs.
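
The power estimate is just τ·ω with unit conversion; a small helper (illustrative, not the actual backend code) makes the arithmetic concrete:

```typescript
// Illustrative power estimate from torque and engine speed (P ≈ τ·ω).
// Torque in N·m, RPM converted to rad/s; result in kilowatts.
function estimatePowerKw(torqueNm: number, rpm: number): number {
  const omega = (rpm * 2 * Math.PI) / 60; // angular velocity in rad/s
  return (torqueNm * omega) / 1000;       // W -> kW
}
```

For example, 300 N·m at 6000 rpm works out to roughly 188 kW.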

A Gemini‑powered live agent that can “sit in the passenger seat”, monitor incoming signals, detect patterns or anomalies, and respond conversationally via chat or voice with diagnostics, explanations, and tuning‑oriented suggestions.

Under the hood I use modern web tech (TypeScript, a React/Next.js‑style frontend, and a cloud‑hosted backend) wired into Google’s AI stack for live agent orchestration. The agent is designed to be multimodal in the future: today it reasons over structured telemetry and text; later it can also consume images (gauges, dashboards, dyno sheets) and potentially video.

How I built it

I started by defining a small but realistic telemetry schema: timestamps, PID values, sensor ranges, and a few derived channels for performance analysis. Then I built a minimal mobile‑to‑cloud pipeline that can stream this data in near real time while staying robust to network drops and noisy signals.
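
A stripped‑down version of that schema, with the sensor‑range check used to reject noisy samples before they hit the stream (field names and the example range are illustrative, not the production schema):

```typescript
// Minimal telemetry schema sketch: each sample is one PID reading,
// and each PID can declare a plausible sensor range.
interface TelemetrySample {
  ts: number;    // epoch ms, set at capture time
  pid: string;   // OBD-II PID or derived channel name
  value: number;
  unit?: string; // e.g. "rpm", "kPa", "°C"
}

interface SensorRange {
  pid: string;
  min: number;
  max: number; // samples outside [min, max] are treated as noise
}

// Drop samples outside their declared sensor range before streaming.
function filterValid(
  samples: TelemetrySample[],
  ranges: SensorRange[]
): TelemetrySample[] {
  const byPid = new Map<string, SensorRange>();
  for (const r of ranges) byPid.set(r.pid, r);
  return samples.filter((s) => {
    const r = byPid.get(s.pid);
    return !r || (s.value >= r.min && s.value <= r.max);
  });
}
```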

Next, I wrapped this telemetry with domain prompts and tools so the live agent can:

Pull the last n seconds of data around an event (for example, a knock event or over‑temp).

Run simple rule‑based checks (temperature thresholds, mixture limits, boost deviation).

Combine those checks with LLM reasoning to generate explanations and recommended next steps that match the user’s skill level.
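
The steps above can be sketched as a pair of tools the agent calls before it reasons: one to pull a telemetry window around an event, one to run threshold checks over it. Channel names and thresholds here are illustrative placeholders, not the real calibration.

```typescript
// Sketch of two agent tools: windowed lookup + rule-based checks.
type Frame = { ts: number; channels: Record<string, number> };

interface Finding {
  rule: string;
  ts: number;
  detail: string;
}

// Pull the frames within `seconds` before an event timestamp.
function windowBefore(frames: Frame[], eventTs: number, seconds: number): Frame[] {
  return frames.filter((f) => f.ts >= eventTs - seconds * 1000 && f.ts <= eventTs);
}

// Simple threshold checks; the LLM turns Findings into plain-language
// explanations and next steps instead of inventing its own numbers.
function runChecks(frames: Frame[]): Finding[] {
  const findings: Finding[] = [];
  for (const f of frames) {
    if ((f.channels["coolant_temp"] ?? 0) > 110) {
      findings.push({ rule: "over_temp", ts: f.ts, detail: `coolant ${f.channels["coolant_temp"]} °C` });
    }
    if (Math.abs(f.channels["boost_deviation"] ?? 0) > 15) {
      findings.push({ rule: "boost_deviation", ts: f.ts, detail: `${f.channels["boost_deviation"]} kPa off target` });
    }
  }
  return findings;
}
```

Keeping the deterministic checks outside the LLM means the model explains findings rather than detecting them, which is what grounds its answers in the actual sensor history.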

Finally, I built a simple web UI where a user can:

See live or recorded sessions.

Ask “Why did my car bog here?” or “Is it safe to keep driving?”

Get responses grounded in actual sensor history instead of generic car advice.

What I learned

Building CAR|TEL forced me to bridge my hands‑on automotive background with formal AI agent design. I learned how important good abstractions are: separating raw telemetry ingestion, domain logic, and the LLM layer made the system easier to reason about and safer to extend.

I also learned how sensitive real‑time experiences are to latency and error handling. Even an extra second or two between an on‑track event and the AI’s response can make the difference between feeling like a live co‑driver and feeling like a log viewer. That pushed me to simplify the data footprint, pre‑compute some metrics, and design prompts that stay efficient.

On the product side, I learned to scope aggressively. My original idea included AR HUD overlays, full CAN reverse‑engineering, and automated tune file generation; for this hackathon I focused on a compelling vertical slice: “AI co‑driver that understands your car’s live vitals and explains them back to you.”

Challenges

Signal noise and reliability. Real‑world automotive data is messy: missing samples, cheap Bluetooth dongles, and inconsistent PIDs across cars. I had to design smoothing, fallbacks, and clear “I’m not sure” states so the agent doesn’t hallucinate confidence when the data is bad.
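
The smoothing‑plus‑fallback idea looks roughly like this: an exponential moving average per channel that degrades to an explicit “unsure” state after too many missed samples. The EMA weight and missed‑sample limit below are placeholder values, not tuned constants.

```typescript
// Exponential smoothing with a confidence fallback: after too many
// missed samples the channel reports unsure instead of a stale value.
class SmoothedChannel {
  private ema: number | null = null;
  private missed = 0;

  constructor(private alpha = 0.3, private maxMissed = 5) {}

  // `null` means the sample was dropped (e.g. Bluetooth hiccup).
  push(value: number | null): { value: number | null; sure: boolean } {
    if (value === null) {
      this.missed++;
    } else {
      this.missed = 0;
      this.ema = this.ema === null ? value : this.alpha * value + (1 - this.alpha) * this.ema;
    }
    return { value: this.ema, sure: this.missed <= this.maxMissed && this.ema !== null };
  }
}
```

Surfacing `sure: false` to the agent is what lets it say “I’m not sure” instead of reasoning confidently over stale data.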

Grounding the AI. Getting from a generic LLM to a car‑aware assistant required careful tool design, constrained prompts, and explicit reasoning steps tied to the telemetry schema.

Time and scope. As a solo founder, I had to balance building the actual backend with creating a demo experience that judges can understand in a few minutes.

Explaining complexity simply. Translating concepts like fuel trims, knock retard, or brake bias into language a non‑engineer understands, without losing technical accuracy, was an ongoing challenge.

Despite these challenges, CAR|TEL now runs end‑to‑end: from live car data to an AI co‑driver that can watch over your engine, your lap, and your learning curve. This hackathon version is only the starting point, but it already shows how live agents can make performance driving and diagnostics more accessible to everyone, not just race teams and specialist tuners.

Built With

  • gemini-live-api
  • github
  • google-cloud-firestore-(or-postgresql)
  • google-cloud-run-(or-firebase-hosting/functions-if-that's-what-you-used)
  • json-over-http
  • node.js
  • obd-ii-/-can-interfaces
  • react-/-next.js
  • rest-/-websocket-apis
  • typescript