Inspiration

Firefighters, cops, and paramedics are incredible at their jobs, but they're still drowning in coordination overhead that has nothing to do with saving lives. Dispatchers manually transcribe 911 calls while simultaneously coordinating units. Incident commanders piece together situational awareness from fragmented radio traffic across multiple channels. Critical information gets lost, delayed, or misheard. We kept asking: what if AI quietly sat inside that existing workflow, listening to radio traffic, building a live picture of the scene, flagging conflicts, and tracking patient counts, all without making responders learn anything new or change how they operate? CCAD AI started from that question. Not "how do we replace the radio" but "how do we make every transmission smarter the moment it's spoken."

What it does

CCAD (Community CAD Software) AI plugs directly into how emergency teams already communicate. Responders talk over four tactical channels (Command, Triage, Logistics, and Comms) exactly like they do on radio, except every transmission is instantly transcribed, structured, and understood by AI.

While responders are focused on the scene, the system is quietly working in the background. It rewrites a live situation summary after every transmission. It watches for moments where two channels contradict each other, like one team reporting a building cleared while another finds patients inside. It tracks patient counts, listens for priority scenarios that demand escalation, and suggests tactical map zones based purely on what responders are saying out loud.

Dispatchers get a clean intake screen that turns a spoken 911 call into a structured incident record automatically. And the public can submit reports, photos, and location data that operators review and decide whether to act on.

Nothing about how responders communicate changes. The AI just makes every word count more.

How we built it

We kept the architecture deliberately simple — two processes, one origin, no cloud infrastructure needed.

  • Backend

    ◦ FastAPI + SQLite handling all REST endpoints, WebSockets, and file storage
    ◦ Google Gemini running five parallel AI tasks after every transmission: situation summary, conflict detection, map zone suggestions, channel analysis, and audio transcription
    ◦ ElevenLabs Scribe as the primary speech-to-text engine, falling back to Gemini's multimodal audio when it's unavailable
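The per-transmission fan-out and the transcription fallback can be sketched roughly like this. This is a minimal illustration, not our actual backend code: every function name is hypothetical, and stubs stand in for the real ElevenLabs and Gemini calls. The point is that the analysis tasks run concurrently, so total latency tracks the slowest task rather than the sum.

```python
import asyncio

# Stubs standing in for the real STT engines (names are illustrative).
async def scribe_transcribe(audio: bytes) -> str:
    raise RuntimeError("scribe unavailable")  # simulate a primary-engine outage

async def gemini_transcribe(audio: bytes) -> str:
    return "fallback transcript"

async def transcribe(audio: bytes) -> str:
    # Primary engine first, multimodal fallback if it fails.
    try:
        return await scribe_transcribe(audio)
    except RuntimeError:
        return await gemini_transcribe(audio)

# Stubs standing in for the Gemini analysis calls.
async def summarize(transcript: str) -> str:
    return f"summary of: {transcript}"

async def detect_conflicts(transcript: str) -> list[str]:
    return []

async def suggest_zones(transcript: str) -> list[str]:
    return ["zone-a"]

async def analyze_channel(transcript: str) -> dict:
    return {"channel": "command"}

async def process_transmission(audio: bytes) -> dict:
    transcript = await transcribe(audio)
    # Fan out: all analysis tasks run concurrently on the same transcript.
    summary, conflicts, zones, channel = await asyncio.gather(
        summarize(transcript),
        detect_conflicts(transcript),
        suggest_zones(transcript),
        analyze_channel(transcript),
    )
    return {"transcript": transcript, "summary": summary,
            "conflicts": conflicts, "zones": zones, "channel": channel}
```

In the real system each stub is a network call, which is exactly why the concurrent `gather` matters.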

  • Frontend

    ◦ Next.js 16 with App Router, React 19, TypeScript, and Tailwind 4
    ◦ Single-origin setup where Next.js proxies all traffic to FastAPI, so mobile devices accept one certificate and everything works
    ◦ Push-to-talk built with MediaRecorder + SpeechRecognition running in parallel for live interim transcripts

  • Real-time layer

    ◦ WebSocket broadcasts push every AI result instantly to all connected clients
    ◦ Each AI output is a separate event type, so components only update what they care about
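The event-per-result envelope looks roughly like this (a hedged sketch with stubbed sockets, not the production broadcaster): each AI output goes out tagged with its own event type, so a client component can subscribe to, say, `summary` events and ignore everything else.

```python
import json

class Broadcaster:
    """Illustrative broadcaster: real clients would be WebSocket connections;
    here each 'socket' is just a list that collects messages."""

    def __init__(self) -> None:
        self.clients: list[list[str]] = []

    def connect(self) -> list[str]:
        client: list[str] = []
        self.clients.append(client)
        return client

    def broadcast(self, event_type: str, data) -> None:
        # Every AI result is its own typed event, never one big "update" blob.
        message = json.dumps({"type": event_type, "data": data})
        for client in self.clients:
            client.append(message)

bus = Broadcaster()
ui = bus.connect()
bus.broadcast("summary", "2 patients, building 4 clear")
bus.broadcast("conflict", {"channels": ["command", "triage"]})
```

Because the type travels in the envelope, adding a new AI output later means adding one event type, without touching components that don't care about it.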

  • Public portal

    ◦ Separate read-only incident board exposing zero operational data
    ◦ Community reporting with photo/video upload feeding a moderated operator review queue

Challenges we ran into

Honestly, the hardest parts were the ones we underestimated.

Getting WebSockets to reliably carry live voice transmissions across multiple connected clients took far more iteration than expected. Keeping every device in sync without dropped events or ghost connections required careful state management on both ends.
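One pattern that helped with ghost connections can be sketched like this (stub classes, not our actual code): treat any send failure as a dead socket and prune it from the room immediately, so one stale connection never blocks or desyncs everyone else.

```python
class FlakySocket:
    """Stub for a WebSocket connection that may have silently died."""

    def __init__(self, alive: bool = True) -> None:
        self.alive = alive
        self.received: list[str] = []

    def send(self, message: str) -> None:
        if not self.alive:
            raise ConnectionError("socket closed")
        self.received.append(message)

def broadcast(clients: list[FlakySocket], message: str) -> list[FlakySocket]:
    """Send to every client; any socket that errors is a ghost connection
    and is dropped from the returned survivor list."""
    survivors = []
    for c in clients:
        try:
            c.send(message)
            survivors.append(c)
        except ConnectionError:
            pass  # prune the ghost; live clients stay in sync
    return survivors
```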

The ElevenLabs Scribe API integration was trickier than the docs suggested. Getting the audio format, mime types, and request structure right took real trial and error before transcriptions came back clean and consistent.

Latency was a constant battle. Every transmission touches audio capture, transcription, AI analysis, database writes, and a WebSocket broadcast. Getting that entire chain to feel fast enough that responders actually trust it meant optimizing each step without cutting corners on accuracy.
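To see where the time was going, we had to measure each stage of that chain separately. A minimal version of that kind of instrumentation (stage names and the `stage` helper are illustrative; `time.sleep` stands in for the real work):

```python
import time
from contextlib import contextmanager

timings: dict[str, float] = {}

@contextmanager
def stage(name: str):
    """Record wall-clock time for one stage of the pipeline."""
    start = time.perf_counter()
    try:
        yield
    finally:
        timings[name] = time.perf_counter() - start

with stage("transcription"):
    time.sleep(0.01)   # stand-in for the STT call
with stage("analysis"):
    time.sleep(0.01)   # stand-in for the Gemini fan-out
```

Per-stage numbers like these are what tell you whether to attack the model call, the database write, or the broadcast.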

Accomplishments that we're proud of

Watching the full pipeline work in real time for the first time was genuinely exciting. A responder speaks, releases the button, and within two seconds the transcript appears on the operator dashboard, the AI summary rewrites itself, and every connected device updates simultaneously. That felt real.

The priority detection on the Command channel was a proud moment. A single priority keyword spoken anywhere in a transmission instantly triggers a critical alert across every connected client.
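The core of that check is simple enough to sketch (the keyword list here is illustrative, not the production set): scan each transcript for trigger phrases and flag the transmission as critical if any appears anywhere in it.

```python
# Illustrative trigger set; the real list is tuned with responders.
PRIORITY_KEYWORDS = {"mayday", "officer down", "evacuate", "collapse"}

def detect_priority(transcript: str) -> list[str]:
    """Return every priority keyword found anywhere in the transcript."""
    text = transcript.lower()
    return sorted(k for k in PRIORITY_KEYWORDS if k in text)

def is_critical(transcript: str) -> bool:
    """One hit is enough to trigger a critical alert."""
    return bool(detect_priority(transcript))
```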

Getting the public reporting portal to feel genuinely useful rather than just a checkbox feature took real thought. Turning bystanders into a structured information source for incident commanders, without exposing any operational data, struck exactly the balance we were aiming for.

What we learned

Building something that operates in high stakes environments teaches you things that normal projects never do.

On the technical side:

  • Stale closures in React event handlers are not just a textbook problem. In a real-time app where milliseconds matter, they cause real bugs that look completely fine in the code.

  • Pointer capture on touch devices is non-negotiable for any press-and-hold interaction. Without it, mobile touch events are unreliable in ways that are nearly impossible to reproduce on desktop.

  • WebSocket event architecture works best when each AI result is its own event type rather than one big update. Components only react to what they care about and everything stays clean.

  • A single-origin setup does not just simplify development. It fundamentally changes whether real users on real devices will actually complete the setup.

On the product side:

  • Responders do not want new tools. They want their existing workflow to become smarter without them noticing.

  • An AI that listens to every transmission and quietly prioritizes what matters, surfaces the critical information, and filters the noise is far more valuable than any dashboard feature we could have built.

  • Latency is trust. If the system feels slow, responders stop believing it.

What's next for CCAD

The foundation is solid. Now we want to make it battle ready.

Offline resilience is the most urgent next step. Emergencies happen where connectivity is worst, and a system that fails when the cell tower is overwhelmed is not a system responders can trust.

Native mobile apps for iOS and Android would remove the certificate friction entirely and enable push notifications so dispatched units get their assignment even when the app is closed.

CAD system integration with existing dispatch platforms like Tyler New World and Motorola PremierOne would let CCAD AI sit alongside current infrastructure rather than asking agencies to replace anything.

Longer term, we want the AI to get predictive. Not just summarizing what is happening, but anticipating what comes next.

Built With

fastapi · sqlite · google-gemini · elevenlabs · next.js · react · typescript · tailwind · websockets
