The Problem We Couldn't Ignore
On July 24, 2024, the Park Fire ignited near Chico, California. It grew explosively, eventually burning more than 429,000 acres — one of the largest wildfires in California history. Behind every acre of destruction was a 911 call center struggling to keep up. A single dispatcher can handle one caller at a time; during a peak surge, call volume can spike to dozens per minute. The math doesn't work — and people pay for it.
We built Clear Dispatch because we wanted to fix that math.
Inspiration
The Tesla vs. Waymo metaphor crystallized everything for us early on. Tesla Autopilot keeps a human driver in control — AI assists, warns, suggests. Waymo drives itself. Neither is universally better; context determines which you need.
911 dispatch exists in the same spectrum. In Assisted Mode, a human dispatcher handles every call — AI runs silently in the background, classifying severity, finding the nearest unit, preparing a voice briefing. In Surge Mode, when call rate crosses a threshold, the system shifts: AI agents take the wheel autonomously while the dispatcher monitors, approves heavy asset deployments, and can override back to manual at any moment.
That "human always stays in control" constraint wasn't a limitation we worked around — it was the design principle we built everything around.
What We Built
Clear Dispatch is a real-time, multi-agent AI dispatch support system for wildfire surge events. Four specialized agents work in a coordinated pipeline on every incoming call:
- MONITOR — tracks call volume over a 60-second sliding window and triggers the mode transition automatically when the surge threshold is crossed (sketched after this list)
- TRIAGE (Claude Haiku) — classifies each call by severity (CRITICAL/URGENT/STANDARD), incident type, and vulnerable-caller status
- RESOURCE — selects the nearest available unit via Haversine distance (also sketched below) and enforces a mandatory Protocol HOLD for heavy assets (air tankers, hazmat, heavy rescue), requiring explicit dispatcher confirmation before dispatch
- RELAY (Claude Haiku + ElevenLabs TTS) — generates a concise dispatcher briefing and synthesizes it as a voice audio clip, delivered in real time
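The MONITOR trigger is conceptually tiny: a buffer of timestamps and a threshold check. A minimal sketch (shown in TypeScript for consistency with the frontend examples; the real agent is Python, and the names and threshold here are illustrative):

```typescript
// Sketch of the MONITOR trigger: a 60-second sliding window of call
// timestamps plus a threshold check. Names and numbers are illustrative;
// the real agent lives in the Python backend.
const WINDOW_MS = 60_000;
const SURGE_THRESHOLD = 10; // calls per window; our real threshold may differ

type Mode = 'ASSISTED' | 'SURGE';

class CallRateMonitor {
  private timestamps: number[] = [];
  private mode: Mode = 'ASSISTED';

  constructor(private onModeChange: (mode: Mode) => void) {}

  /** Record an incoming call and re-evaluate the mode. */
  recordCall(now: number = Date.now()): void {
    this.timestamps.push(now);
    // Evict timestamps that have fallen out of the 60-second window.
    this.timestamps = this.timestamps.filter(t => now - t <= WINDOW_MS);

    const next: Mode =
      this.timestamps.length >= SURGE_THRESHOLD ? 'SURGE' : 'ASSISTED';
    if (next !== this.mode) {
      this.mode = next;
      this.onModeChange(next); // broadcast the transition on the event bus
    }
  }
}
```

In practice a trigger like this also wants hysteresis (a lower exit threshold than the entry threshold) so the mode doesn't flap right at the boundary, and a periodic re-check so a quiet period can end a surge even with no new calls arriving.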
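RESOURCE's selection logic is similarly compact: Haversine distance over the available pool, plus a heavy-asset check that routes to HOLD instead of dispatching directly. A sketch under the same caveats (illustrative names; the real implementation is Python):

```typescript
// Sketch of RESOURCE's nearest-unit selection with the heavy-asset check.
const EARTH_RADIUS_KM = 6371;

interface Unit {
  id: string;
  type: string; // e.g. 'engine', 'air_tanker', 'hazmat'
  lat: number;
  lon: number;
  available: boolean;
}

const HEAVY_ASSETS = new Set(['air_tanker', 'hazmat', 'heavy_rescue']);

/** Great-circle distance between two points in km (Haversine formula). */
function haversineKm(lat1: number, lon1: number, lat2: number, lon2: number): number {
  const rad = (d: number) => (d * Math.PI) / 180;
  const dLat = rad(lat2 - lat1);
  const dLon = rad(lon2 - lon1);
  const a =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(rad(lat1)) * Math.cos(rad(lat2)) * Math.sin(dLon / 2) ** 2;
  return 2 * EARTH_RADIUS_KM * Math.asin(Math.sqrt(a));
}

/** Pick the closest available unit; heavy assets go to HOLD, not straight to dispatch. */
function selectUnit(units: Unit[], lat: number, lon: number) {
  const candidates = units.filter(u => u.available);
  if (candidates.length === 0) return null;
  const best = candidates.reduce((a, b) =>
    haversineKm(a.lat, a.lon, lat, lon) <= haversineKm(b.lat, b.lon, lat, lon) ? a : b
  );
  return { unit: best, requiresHold: HEAVY_ASSETS.has(best.type) };
}
```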
The entire UI updates through a WebSocket event bus — no polling, no page refreshes. Every agent state transition, every call update, every HOLD event flows through a pure reducer in React, giving the dispatcher a live picture of the system at all times.
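The wiring is deliberately thin: the socket's only job is to parse a server event and hand it to the reducer. A simplified sketch (event names are illustrative):

```typescript
// Simplified event-bus wiring: every server event becomes a reducer action.
type ServerEvent =
  | { type: 'AGENT_STATE'; agent: string; state: string }
  | { type: 'CALL_UPDATE'; callId: string; severity: string }
  | { type: 'HOLD_REQUESTED'; holdId: string; unitId: string };

function connectEventBus(url: string, dispatch: (e: ServerEvent) => void): WebSocket {
  const ws = new WebSocket(url);
  ws.onmessage = (msg: MessageEvent) => {
    dispatch(JSON.parse(msg.data) as ServerEvent); // push only, no polling
  };
  return ws;
}
```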
For demo realism, we built two distinct call intake paths:
- Assisted Mode — Live Transcription: the dispatcher selects a pre-recorded scenario; the backend streams sentences 1.5 seconds apart while Claude Haiku extracts structured fields (location, hazards, people affected) every 2 sentences, simulating a real phone call with progressive AI understanding (see the sketch after this list)
- Surge Mode — ElevenLabs Voice Agent: a judge or teammate speaks directly into a phone; an autonomous conversational AI agent conducts the 911 intake; the full transcript is sent to Claude Haiku server-side for structured extraction, then fed into the same dispatch pipeline
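The assisted-mode intake loop is just a timed stream with periodic extraction. A sketch of its shape (the real loop runs in the Python backend; `extractFields` and `broadcast` here are stand-ins for the Claude Haiku call and the WebSocket bus):

```typescript
// Sketch of the assisted-mode intake loop: stream scenario sentences at a
// fixed cadence and re-run structured extraction every 2 sentences.
const SENTENCE_INTERVAL_MS = 1500;
const EXTRACT_EVERY = 2;

interface CallFields { location?: string; hazards?: string[]; peopleAffected?: number; }

// Stand-ins for the real pieces (a Claude Haiku call and the WebSocket bus).
async function extractFields(transcript: string): Promise<CallFields> {
  return {}; // the real version prompts Claude Haiku for structured JSON
}
function broadcast(event: object): void {
  console.log('broadcast', event); // the real version fans out over WebSocket
}

const sleep = (ms: number) => new Promise<void>(r => setTimeout(r, ms));

async function streamScenario(sentences: string[]): Promise<void> {
  const transcript: string[] = [];
  for (const [i, sentence] of sentences.entries()) {
    transcript.push(sentence);
    broadcast({ type: 'TRANSCRIPT_SENTENCE', sentence }); // live transcription
    if ((i + 1) % EXTRACT_EVERY === 0) {
      const fields = await extractFields(transcript.join(' '));
      broadcast({ type: 'FIELDS_UPDATE', fields }); // progressive understanding
    }
    await sleep(SENTENCE_INTERVAL_MS);
  }
}
```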
The data is grounded in reality: real Yolo County vulnerability scores, real Park Fire GeoJSON perimeter, real CAL FIRE resource configurations.
Tech Stack
| Layer | Technology |
|---|---|
| Backend | Python 3.11, FastAPI, uv |
| AI | Claude Haiku (claude-haiku-4-5-20251001) |
| Voice | ElevenLabs TTS + Conversational AI |
| Frontend | React 18, TypeScript, Vite, Tailwind CSS |
| Maps | Leaflet + CartoDB dark tiles |
| Real-time | Native WebSocket (FastAPI ↔ React) |
| State | In-memory (no database — demo-ready on first run) |
Challenges
The hardest problem wasn't AI — it was state. Coordinating four async agents across WebSocket connections while maintaining a consistent, correct UI state required careful design. We learned early that a pure reducer (`reducer.ts`) with a strict `default: return state` fallback was non-negotiable — one unhandled message type silently wiped all React state. That was a painful lesson discovered at 2am.
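The pattern that saved us, with illustrative action names and state shape:

```typescript
// A pure reducer whose default branch returns the previous state unchanged,
// so an unknown message can never nuke the UI.
interface DispatchState {
  mode: 'ASSISTED' | 'SURGE';
  calls: Record<string, unknown>;
}

type Action =
  | { type: 'MODE_CHANGE'; mode: DispatchState['mode'] }
  | { type: 'CALL_UPDATE'; callId: string; call: unknown };

function reducer(state: DispatchState, action: Action): DispatchState {
  switch (action.type) {
    case 'MODE_CHANGE':
      return { ...state, mode: action.mode };
    case 'CALL_UPDATE':
      return { ...state, calls: { ...state.calls, [action.callId]: action.call } };
    default:
      // Non-negotiable: the typed union makes this look unreachable, but
      // actions arrive over the wire, so unknown types do happen at runtime.
      // They must be a no-op, never undefined.
      return state;
  }
}
```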
The HOLD protocol edge cases were brutal. When a heavy asset dispatch times out after 60 seconds without dispatcher confirmation, the unit has to be returned to the available pool, the hold cleaned up, and the pipeline continued — all without corrupting call state. Getting this right under concurrent surge load took more iterations than any other single feature.
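The invariant we converged on: expiry and confirmation must both be idempotent, and whichever runs first removes the hold so the other becomes a no-op. A sketch with illustrative names (our real version is Python with async timers):

```typescript
// Sketch of the Protocol HOLD timeout path: without dispatcher confirmation
// within 60 seconds, release the unit, clean up the hold, continue the pipeline.
const HOLD_TIMEOUT_MS = 60_000;

interface Hold {
  holdId: string;
  unitId: string;
  callId: string;
  timer: ReturnType<typeof setTimeout>;
}

const holds = new Map<string, Hold>();

// Stand-ins for the real resource-pool and pipeline hooks.
function releaseUnit(unitId: string): void {
  console.log(`unit ${unitId} returned to the available pool`);
}
function continuePipeline(callId: string): void {
  console.log(`call ${callId} continues without the heavy asset`);
}

function requestHold(holdId: string, unitId: string, callId: string): void {
  const timer = setTimeout(() => expireHold(holdId), HOLD_TIMEOUT_MS);
  holds.set(holdId, { holdId, unitId, callId, timer });
}

function confirmHold(holdId: string): void {
  const hold = holds.get(holdId);
  if (!hold) return; // already expired: a late confirmation is a no-op
  clearTimeout(hold.timer);
  holds.delete(holdId);
  // ...proceed with the heavy-asset dispatch...
}

function expireHold(holdId: string): void {
  const hold = holds.get(holdId);
  if (!hold) return; // already confirmed
  holds.delete(hold.holdId); // remove first so confirm/expire can't double-fire
  releaseUnit(hold.unitId);
  continuePipeline(hold.callId);
}
```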
ElevenLabs WebRTC on mobile. The SOS phone page — where a judge scans a QR code and speaks directly to the voice agent — works only on Android Chrome over HTTPS; iOS Safari blocks microphone access on plain HTTP. We had to add `@vitejs/plugin-basic-ssl` and self-signed certs to make the LAN demo viable, which itself introduced Vite proxy configuration issues we hadn't anticipated.
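For anyone fighting the same setup, the shape of the fix looks roughly like this (ports, proxy paths, and the React plugin here are illustrative, not copied from our repo):

```typescript
// vite.config.ts — serve the frontend over self-signed HTTPS on the LAN and
// proxy API + WebSocket traffic to the FastAPI backend.
import { defineConfig } from 'vite';
import react from '@vitejs/plugin-react';
import basicSsl from '@vitejs/plugin-basic-ssl';

export default defineConfig({
  plugins: [react(), basicSsl()],
  server: {
    host: true, // expose on the LAN so a phone can reach the dev server
    proxy: {
      '/api': { target: 'http://localhost:8000', changeOrigin: true },
      '/ws': { target: 'ws://localhost:8000', ws: true }, // WebSocket passthrough
    },
  },
});
```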
React StrictMode and WebSockets are enemies. StrictMode's mount→cleanup→remount cycle caused a ghost reconnection timer that opened a second WebSocket connection, making every broadcast dispatch twice. The fix — an `isCancelled` guard in the effect closure — is two lines of code that took two hours to diagnose.
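The guard, in context (a simplified version of our hook):

```typescript
// A cancellation flag in the effect closure so StrictMode's
// mount -> cleanup -> remount cycle can't leave a ghost reconnect timer alive.
import { useEffect } from 'react';

function useEventBus(url: string, onEvent: (e: unknown) => void): void {
  useEffect(() => {
    let isCancelled = false;
    let ws: WebSocket | null = null;
    let retryTimer: ReturnType<typeof setTimeout> | undefined;

    const connect = () => {
      ws = new WebSocket(url);
      ws.onmessage = msg => onEvent(JSON.parse(msg.data));
      ws.onclose = () => {
        if (isCancelled) return; // the guard: dead effects don't reconnect
        retryTimer = setTimeout(connect, 1000);
      };
    };
    connect();

    return () => {
      isCancelled = true; // mark this closure as dead
      clearTimeout(retryTimer);
      ws?.close(); // triggers onclose, which now no-ops
    };
  }, [url, onEvent]);
}
```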
What We Learned
Building something grounded in a real disaster (the Park Fire, Yolo County geography, actual CAL FIRE protocols) changed the texture of every design decision. It's easy to build a toy dispatch system. It's much harder to build one where the HOLD protocol reflects §4.2 of a real firefighting manual, where vulnerable zone scores reflect real demographic data, and where the AI never speaks directly to a caller — because that's how real dispatch works.
We also learned that human-in-the-loop isn't a constraint — it's a feature. The most technically impressive version of this system would dispatch everything autonomously. But the most useful version keeps a human accountable for every heavy asset deployment, every override decision, every mode transition. Clear Dispatch is better because of what it doesn't automate.
Built With
- anthropic-claude
- bash
- claudeapi
- elevenlabs
- fastapi
- geolocation-api
- javascript
- leaflet.js
- python
- tailwind-css
- typescript
- vite

