Emergency Copilot
Inspiration
Emergency response today is reactive and often blind to what is happening on the ground in real time. Dispatchers rely on fragmented, verbal reports while critical events unfold visually without context. We wanted to bridge this gap by transforming passive video streams into structured, real-time intelligence, enabling faster and more informed emergency response.
Emergency Copilot was built to give dispatchers immediate situational awareness using AI vision, live video streaming, and automated event timelines — without requiring bystanders to manually explain what’s happening.
What It Does
Emergency Copilot converts live video into actionable emergency insights:
- Continuously monitors passive video feeds for anomalies using AI vision.
- Detects emergency signals (accidents, fire, weapons, medical events).
- Captures snapshots when anomalies are detected.
- Streams snapshots to the backend via WebSockets.
- Uses an AI agent to generate real-time incident timelines from snapshot descriptions.
- Groups related events based on time, type, and location into structured incidents.
- Streams live video, timelines, and incident updates to a dispatcher dashboard.
- Visualizes incidents on a live map view.
- Updates the dispatcher dashboard instantly via WebSockets and Server-Sent Events (SSE).
How We Built It
Overshoot anomaly detection + snapshotting
We use the Overshoot AI vision SDK to continuously analyze live video feeds. When anomaly scores exceed thresholds, the system transitions from passive monitoring to high-frequency snapshotting (1-second intervals).
Snapshots are streamed via WebSockets directly to the backend API and associated with an active video session.
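The passive-to-snapshotting transition can be sketched as a small state machine. This is an illustrative sketch, not the Overshoot SDK's actual API: the threshold values, the hysteresis band, and the `nextMode` name are all assumptions; in the real pipeline the anomaly score comes from the Overshoot vision SDK and snapshots go out over the WebSocket.

```typescript
// Sketch of the passive -> high-frequency snapshotting transition.
// Thresholds are illustrative assumptions; the anomaly score itself
// would come from the Overshoot vision SDK.
type Mode = "passive" | "snapshotting";

const ENTER_THRESHOLD = 0.7; // assumed score (0..1) that triggers snapshotting
const EXIT_THRESHOLD = 0.35; // hysteresis: drop back to passive only below this
const SNAPSHOT_INTERVAL_MS = 1000; // 1-second snapshot cadence while active

// Pure transition: decide the monitoring mode from the latest anomaly score.
// Hysteresis keeps the system from flapping when scores hover near the cutoff.
function nextMode(current: Mode, anomalyScore: number): Mode {
  if (anomalyScore >= ENTER_THRESHOLD) return "snapshotting";
  if (current === "snapshotting" && anomalyScore >= EXIT_THRESHOLD) {
    return "snapshotting";
  }
  return "passive";
}
```

While in `snapshotting` mode, the client captures a frame every `SNAPSHOT_INTERVAL_MS` and sends it over the open WebSocket tagged with the active session ID.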
Gemini Timeline Agent from snapshots
The backend hosts a Gemini-powered AI agent that ingests snapshots directly from the WebSocket stream.
As snapshots arrive, the agent:
- Interprets textual context from each frame description
- Maintains temporal state across snapshots
- Generates structured, chronological timeline events describing change between snapshot states
Timeline events are continuously appended as the incident evolves, producing a live narrative rather than a static summary.
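The agent's incremental state handling can be sketched as a small class. This is a minimal sketch, not the production agent: the `describeChange` step is where the real system calls Gemini, and it is injected here as a plain function so the temporal-state logic can be shown without an API key. All names are illustrative.

```typescript
// Minimal sketch of the incremental timeline agent. The real system's
// describeChange step is a Gemini call; here it is injected so the
// state handling is self-contained. Names are illustrative.
interface TimelineEvent {
  at: number; // snapshot timestamp (ms)
  text: string; // AI-generated description of what changed
}

type ChangeDescriber = (prev: string | null, next: string) => string;

class TimelineAgent {
  private lastDescription: string | null = null;
  readonly events: TimelineEvent[] = [];

  constructor(private describeChange: ChangeDescriber) {}

  // Called once per incoming snapshot description from the WebSocket stream.
  ingest(description: string, timestampMs: number): void {
    // Only append when the scene actually changed, so the timeline
    // stays a narrative of transitions rather than a frame log.
    if (description !== this.lastDescription) {
      this.events.push({
        at: timestampMs,
        text: this.describeChange(this.lastDescription, description),
      });
      this.lastDescription = description;
    }
  }
}
```

Keeping the previous description as explicit state is what lets each timeline entry describe a change between snapshot states instead of re-summarizing the whole scene.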
Video Streaming with LiveKit
Live video is streamed using LiveKit (WebRTC):
- Passive video sources publish streams to LiveKit rooms once anomaly detection is triggered.
- Dispatchers subscribe to rooms for low-latency playback.
- Tokens are generated securely by frontend API routes.
Video streams remain synchronized with timeline updates so dispatchers can watch events unfold while reading AI-generated context.
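To make the token step concrete, here is a sketch of what a LiveKit join token contains. In production you would use the `livekit-server-sdk` `AccessToken` class rather than signing by hand; this hand-rolled version with Node's `crypto` only exists to make the JWT claims and video grant visible, and the expiry choice is an assumption.

```typescript
import { createHmac } from "node:crypto";

// Illustrative sketch of a LiveKit join token (normally built by the
// livekit-server-sdk AccessToken class). It is a standard HS256 JWT
// whose payload carries a "video" grant naming the room to join.
const b64url = (s: string): string => Buffer.from(s).toString("base64url");

function makeJoinToken(
  apiKey: string,
  apiSecret: string,
  identity: string,
  room: string,
): string {
  const header = { alg: "HS256", typ: "JWT" };
  const now = Math.floor(Date.now() / 1000);
  const payload = {
    iss: apiKey, // LiveKit API key
    sub: identity, // participant identity (e.g. a dispatcher)
    exp: now + 600, // short-lived token: 10 minutes (assumed)
    video: { roomJoin: true, room }, // the video grant
  };
  const signingInput =
    b64url(JSON.stringify(header)) + "." + b64url(JSON.stringify(payload));
  const sig = createHmac("sha256", apiSecret)
    .update(signingInput)
    .digest("base64url");
  return signingInput + "." + sig;
}
```

A Next.js API route can mint such a token per dispatcher and room, keeping the API secret server-side; the browser only ever sees the signed JWT.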
Dispatcher Dashboard & Incident Management
The dispatcher dashboard is the core interface for Emergency Copilot and is built with Next.js + React.
Key dashboard features include:
Incident Feed:
Incoming events are automatically grouped into incidents. Each incident represents a single unfolding emergency with its associated videos.
Live Incident Timeline:
As snapshots arrive and are processed by the backend Gemini agent, timeline entries appear in real time, giving dispatchers a continuously updating narrative of the situation in each video.
AI Summary:
An AI-generated video summary continuously condenses all observed activity so far into a single, high-level description. It updates in real time as new snapshots arrive, providing immediate situational context without requiring review of the full timeline or video. The summary is optimized for rapid triage, highlighting emergency type, visible hazards, involved individuals, and overall severity at a glance.
Map View:
Active incidents are plotted on a live map, allowing dispatchers to quickly assess location, proximity, and clustering of emergencies.
Live Video Player:
Dispatchers can view the associated LiveKit video stream for any active video, synchronized with timeline updates.
Real-Time Updates:
WebSockets and SSE are used to push new incidents, timeline updates, and state changes instantly without page refreshes.
This allows dispatchers to move seamlessly between incidents and inspect their associated videos while maintaining situational awareness across multiple emergencies.
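The SSE half of the real-time updates is just text frames over a kept-open HTTP response. A minimal sketch of the wire format, with hypothetical event names (the actual event taxonomy may differ):

```typescript
// Sketch of the server side of the dashboard's SSE channel.
// Each frame carries one JSON-encoded update; event names are illustrative.
type DashboardUpdate =
  | { kind: "incident.created"; incidentId: string }
  | { kind: "timeline.appended"; incidentId: string; text: string };

// Serialize one update into the text/event-stream wire format:
// an `event:` line, a `data:` line, and a blank-line terminator.
function sseFrame(update: DashboardUpdate): string {
  return `event: ${update.kind}\ndata: ${JSON.stringify(update)}\n\n`;
}
```

On the dashboard side, the browser's built-in `EventSource` can subscribe to the stream and attach a listener per event name, which is why no page refresh is needed.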
Real-Time Backend Infrastructure
The backend (emergency-copilot-api) provides:
- WebSocket ingestion for snapshots and anomaly signals
- A Gemini-based AI agent for real-time timeline generation
- Incident grouping and state management
- PostgreSQL for persistent storage of incidents and timeline events
- SSE endpoints for live dashboard updates
- REST APIs for incidents, timelines, snapshots, and video metadata
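The incident grouping mentioned above (by time, type, and location) can be sketched as a pure function. The thresholds, field names, and the simple bounding-box distance check are all assumptions for illustration; the real state management is richer than this.

```typescript
// Sketch of the grouping rule: an event joins an existing incident when
// it shares a type and is close in time and space to that incident's
// latest event; otherwise a new incident opens. Thresholds are assumed.
interface DetectedEvent { type: string; timeMs: number; lat: number; lng: number; }
interface Incident { type: string; events: DetectedEvent[]; }

const TIME_WINDOW_MS = 5 * 60 * 1000; // assumed: 5-minute window
const MAX_DISTANCE_DEG = 0.002;       // assumed: roughly a couple hundred meters

function belongsTo(incident: Incident, e: DetectedEvent): boolean {
  const last = incident.events[incident.events.length - 1];
  return (
    incident.type === e.type &&
    e.timeMs - last.timeMs <= TIME_WINDOW_MS &&
    Math.abs(e.lat - last.lat) <= MAX_DISTANCE_DEG &&
    Math.abs(e.lng - last.lng) <= MAX_DISTANCE_DEG
  );
}

function groupEvents(events: DetectedEvent[]): Incident[] {
  const incidents: Incident[] = [];
  // Process chronologically so "latest event" comparisons are meaningful.
  for (const e of [...events].sort((a, b) => a.timeMs - b.timeMs)) {
    const match = incidents.find((i) => belongsTo(i, e));
    if (match) match.events.push(e);
    else incidents.push({ type: e.type, events: [e] });
  }
  return incidents;
}
```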
The frontend (emergency-copilot) consumes these streams to render a responsive, real-time dispatcher experience.
Challenges We Ran Into
- Coordinating LiveKit (WebRTC), WebSockets, and Server-Sent Events while keeping video, snapshot ingestion, and UI updates synchronized.
- Reconstructing coherent incident context from Overshoot’s sparse anomaly signals and isolated snapshots.
- Designing a Gemini-based AI agent that reasons incrementally over streaming data rather than static inputs.
- Working with partially structured and evolving JSON data across the backend and frontend.
- Sourcing realistic emergency video footage that matched our detection pipeline and supported a compelling live demo.
What We Learned
- Building AI agents for live systems requires maintaining state and reasoning incrementally rather than relying on batch inference.
- Real-time video, vision signals, and UI updates need clearly defined communication boundaries to remain understandable and debuggable.
- High-level summaries and timelines serve different cognitive purposes and both are necessary for effective situational awareness.
- Flexible data handling is essential when working with AI-generated and partially structured outputs.
- Designing for emergency scenarios prioritizes clarity, latency, and information hierarchy over feature complexity.
What’s Next
- Persisting live video streams to durable bucket storage after sessions end for replay, review, and training.
- Improving the AI agent’s temporal reasoning over streamed snapshot descriptions to better handle long-running and ambiguous incidents.
- Supporting multi-camera and multi-angle live video streams fused into a single unified incident view.
- Adding prioritization and triage signals to help dispatchers focus on the most critical incidents first.
- Expanding anomaly detection to additional emergency scenarios and environmental hazards.
- Integrating with real-world emergency dispatch and CAD systems for operational deployment.
Built With
- drizzle
- express.js
- gemini
- nextjs
- overshoot
- postgresql
- react
- typescript