Inspiration

Sentinel Dispatch was inspired by climate emergency headlines—wildfires, extreme weather, and the need for faster emergency response. The goal was to use AI to help dispatchers prioritize 911 calls during crises.

What it does

This system integrates multiple data streams (emergency calls, weather, fire spread) to give dispatchers continuously updated risk scores during rapidly evolving wildfire events. It is a proof-of-concept implementation focused on the Lahaina, Maui wildfire of August 2023: the emergency calls were synthetically generated with Gemini, modeled on that event, while historical weather and fire data for Maui were sourced from the NASA FIRMS and NOAA APIs.

How we built it

The system processes emergency calls through a pipeline:

  1. Call Processing Agent (Java/Flink) classifies calls using Gemini NLP
  2. Data Enrichment Agent (Java/Flink) enriches calls with spatial-temporal weather and fire data
  3. Risk Scoring Agent (Java/Flink) calculates multi-factor risk scores
  4. Alert Generation Agent (Java/Flink) generates contextual alerts
  5. Python FastAPI backend serves ML services and provides a dashboard
  6. React frontend displays prioritized cases on an interactive map
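
The classification step above depends on getting structured, machine-readable output back from Gemini. As a minimal sketch of that idea (the field names, category set, and fallback values here are illustrative assumptions, not the project's actual schema), an agent can validate the model's JSON response and fall back to a safe default when the output is malformed:

```python
import json

# Hypothetical category set; the real project's taxonomy may differ.
VALID_CATEGORIES = {"fire", "medical", "evacuation", "other"}

def parse_classification(raw: str) -> dict:
    """Parse a model response expected to look like
    {"category": "fire", "urgency": 0.9}. Any malformed or
    out-of-range output falls back to a conservative default
    so the streaming job never fails on a bad response."""
    fallback = {"category": "other", "urgency": 0.5}
    try:
        data = json.loads(raw)
    except (json.JSONDecodeError, TypeError):
        return fallback
    category = data.get("category")
    urgency = data.get("urgency")
    if category not in VALID_CATEGORIES:
        return fallback
    if not isinstance(urgency, (int, float)) or not 0.0 <= urgency <= 1.0:
        return fallback
    return {"category": category, "urgency": float(urgency)}
```

Validating at this boundary keeps one bad model response from crashing the whole Flink job.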

Challenges we ran into

  1. Pivoting from Python to Java Agents: We initially planned Python streaming agents, but submitting them to a containerized Flink job manager kept failing. With limited time, we switched to Java agents, which integrated more reliably with Flink's runtime.

  2. Gemini Rate Limits: With a quota of 20 requests per day, each prompt iteration was costly. We optimized prompts and tested carefully to avoid exhausting the quota during development.

  3. Vertex AI Scope Reduction: We wanted to train custom models, but Vertex AI requires at least 1,000 training records. Without sufficient data, we cut it from scope and focused on rule-based risk scoring, with Gemini handling classification.

These constraints shaped the architecture: Java Flink agents for reliability, Gemini for NLP, and a rule-based risk scoring system that works with limited training data.
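
One practical way to stretch a 20-request-per-day quota is to cache responses and enforce the budget client-side. This is a hedged sketch of that pattern, not the project's actual client: `call_model` stands in for the real Gemini call, and the class and method names are made up for illustration.

```python
import hashlib

class BudgetedClient:
    """Wrap a model call with a daily request budget and a cache,
    so repeated prompts during development don't burn quota."""

    def __init__(self, call_model, budget: int = 20):
        self.call_model = call_model  # stand-in for the real API call
        self.budget = budget
        self.used = 0
        self.cache = {}

    def ask(self, prompt: str) -> str:
        key = hashlib.sha256(prompt.encode()).hexdigest()
        if key in self.cache:
            return self.cache[key]  # cache hit: no quota spent
        if self.used >= self.budget:
            raise RuntimeError("daily request budget exhausted")
        self.used += 1
        result = self.call_model(prompt)
        self.cache[key] = result
        return result
```

Failing fast locally, before the API rejects the request, makes the remaining quota predictable during a hackathon sprint.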

Accomplishments that we're proud of

  • Built an end-to-end real-time emergency dispatch system processing calls from ingestion to visualization
  • Integrated Apache Flink streaming agents with Python ML services via async HTTP, handling concurrency and errors
  • Implemented multi-factor risk scoring combining urgency, fire proximity, weather, and vulnerability
  • Created a real-time dashboard with interactive map, WebSocket updates, and prioritized case management
  • Pivoted from Python to Java Flink agents when integration issues arose, maintaining project momentum
  • Optimized Gemini API usage within strict rate limits (20 requests/day)
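
The multi-factor risk scoring mentioned above can be sketched as a weighted sum of normalized factors. The weights below are illustrative placeholders, not the project's tuned values:

```python
def risk_score(urgency, fire_proximity, weather_severity, vulnerability,
               weights=(0.35, 0.30, 0.20, 0.15)):
    """Combine four factors, each normalized to [0, 1], into a single
    score in [0, 1]. Weights are assumed for illustration; a real
    deployment would calibrate them against dispatcher judgment."""
    factors = (urgency, fire_proximity, weather_severity, vulnerability)
    if any(not 0.0 <= f <= 1.0 for f in factors):
        raise ValueError("all factors must be normalized to [0, 1]")
    return sum(w * f for w, f in zip(weights, factors))
```

A linear combination like this is easy to explain to a dispatcher (each factor's contribution is visible), which matters more in emergency response than a marginally better opaque model.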

What we learned

  • Flink streaming: async HTTP functions, ResultFuture completion patterns, preventing job failures
  • Prompt engineering: structured data extraction, handling edge cases, consistent outputs
  • Event-driven systems: Kafka topic design, consumer group management, offset handling
  • Geospatial processing: spatial-temporal joins, distance/bearing calculations, time window filtering
  • Full-stack integration: connecting Flink agents, FastAPI services, React frontend, and WebSocket updates
  • Working within constraints: adapting to API limits, technology pivots, scope decisions
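
The distance/bearing calculations from the geospatial bullet above come down to two standard great-circle formulas. A self-contained sketch (standard haversine and initial-bearing math, not the project's actual enrichment code):

```python
import math

EARTH_RADIUS_KM = 6371.0  # mean Earth radius

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two lat/lon points,
    e.g. from a caller's location to the nearest fire detection."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = (math.sin(dp / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2)
    return 2 * EARTH_RADIUS_KM * math.asin(math.sqrt(a))

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial bearing in degrees (0 = north, clockwise) from
    point 1 to point 2, e.g. which direction the fire lies."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dl = math.radians(lon2 - lon1)
    x = math.sin(dl) * math.cos(p2)
    y = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dl)
    return (math.degrees(math.atan2(x, y)) + 360) % 360
```

Combined with a time-window filter on detection timestamps, this is enough to answer "how far, and in what direction, is the nearest recent fire detection from this call?"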

What's next for Sentinel Dispatch

  • Production readiness: Confluent Cloud migration
  • Enhanced AI/ML: Vertex AI integration
  • Expanded data sources: GPS (emergency vehicle telemetry) and road status traffic data
  • Multi-hazard support: extend beyond wildfires to hurricanes, floods, earthquakes
  • Advanced features: resource optimization
  • Performance: horizontal scaling, caching, and PostgreSQL migration
  • Security: compliant PII management