Inspiration

We talked to rideshare and truck drivers — immigrants who've been in the US for years — and asked them what happens after an accident. The answer was always the same: panic, confusion, and loss. Not because they didn't have footage. Because they didn't know what to do with it.

One driver told us he had clear dashcam video proving the other driver's fault — and still lost the insurance dispute because he couldn't explain what happened in English. That moment defined ClAImpilot.

What it does

ClAImpilot is an AI-powered dashcam system that activates at the moment of impact and guides non-English-speaking drivers through the entire post-incident process, in their language.

  • Collision detected via IMU + AI trigger
  • Multilingual voice agent asks the driver structured questions to capture the incident
  • AI-generated accident report built from sensor data, video, and driver input
  • Automated claim filing — insurance submission and platform dispute support via Twilio

No driver needs to know what "deductible" means. ClAImpilot handles it.

How we built it

We built a Hardware-in-the-Loop (HIL) simulator to validate the full pipeline without waiting for a real accident:

  • CARLA (autonomous driving simulator) generates realistic collision events
  • ESP32-S3 + MPU6050 IMU on a rotating platform captures real sensor data
  • Solenoid delivers a physical impact impulse; ESP-NOW wireless eliminates wire tangling
  • Sensor data feeds into a FastAPI backend → YOLOv8 for structured scene understanding → Claude API for report generation → Twilio for voice call delivery

The simulator lets us demo the complete sensor-to-report pipeline with real hardware and real AI — not a mockup.
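The report-assembly step in that pipeline can be illustrated with a minimal sketch. The field names and prompt wording below are our own assumptions for illustration; the actual backend feeds YOLOv8 detections and driver input into the Claude API rather than the plain string rendering shown here.

```python
from dataclasses import dataclass

@dataclass
class IncidentReport:
    """Structured accident report built from the three input streams."""
    peak_g: float                    # from the IMU collision trigger
    detections: list[str]            # from YOLOv8 scene understanding
    driver_answers: dict[str, str]   # from the multilingual voice agent
    language: str = "es"

    def to_prompt(self) -> str:
        """Render the structured report as a prompt for an LLM
        report writer (in production, the Claude API)."""
        answers = "\n".join(f"- {q}: {a}" for q, a in self.driver_answers.items())
        return (
            f"Write an insurance incident report in English.\n"
            f"Peak impact: {self.peak_g:.1f} g\n"
            f"Scene objects: {', '.join(self.detections)}\n"
            f"Driver statements (given in '{self.language}'):\n{answers}"
        )

report = IncidentReport(
    peak_g=5.1,
    detections=["car", "traffic light"],
    driver_answers={"Where were you hit?": "rear bumper"},
)
prompt = report.to_prompt()
```

Keeping the report structured up to the last step means the same object can feed the insurance filing and the platform dispute, not just the LLM prompt.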

Challenges we ran into

  • Sim-to-real gap: CARLA's synthetic imagery is clean; real-world dashcam footage is not. We scoped YOLO to structured event capture rather than real-time retraining.
  • Wireless on a moving platform: Mounting the ESP32 on a rotating rig while streaming live IMU data required switching to ESP-NOW to avoid wire tangling.
  • Multilingual voice UX: Designing a voice agent that works under stress, in a second language, at the side of a road — the interaction model needed to be extremely simple.
  • Scope discipline: The temptation to build everything (app, hardware, cloud, AI) at once. We held the line: HIL simulator first, full product later.

Accomplishments that we're proud of

  • Built a working end-to-end pipeline — collision trigger to voice report — in a hackathon timeframe
  • Validated customer need through real driver interviews before writing a line of code
  • Confirmed pricing acceptance: drivers we interviewed said they would pay $50 for the hardware plus $10/month
  • Identified a competitive gap that no existing player (Nexar, Nauto, Netradyne, Samsara) has filled: post-incident AI agent workflow for individual drivers

What we learned

  • The real pain isn't the accident — it's the 72 hours after
  • Language is only part of the barrier; process unfamiliarity is equally paralyzing
  • Hardware-in-the-loop simulation is a legitimate validation strategy, not just a demo trick
  • Scoping ruthlessly is a feature, not a compromise

What's next for ClAImpilot

  • Month 1–3: Software MVP — multilingual voice agent app, works with any existing dashcam, free beta with 100 seed users; collect structured user surveys in parallel
  • Month 4–9: ClAImpilot hardware bundle launch ($50 device + $10/mo), informed by survey data
  • Month 10+: B2B2C partnerships — Turo, Hertz, insurance companies — with traction data in hand
  • Provisional patent filing on multilingual post-incident voice triage workflow
