Inspiration
The FIREWISER platform is a direct response to the escalating catastrophe of the 2025 Pacific Palisades wildfires, which exposed a critical gap in contemporary emergency management—the failure to transition from reactive alerting to predictive, personalized resource triage.
With verified damage including more than 23,000 acres burned, 6,837 structures destroyed, and 12 lives lost, and with total economic losses estimated at up to $131 billion, the imperative is clear: conventional systems, reliant on static data, cannot manage cascading, dynamic risk.
FIREWISER solves this by creating an Intelligent Digital Twin of the evacuation environment, prioritizing not just proximity but personalized, behavioral vulnerability.
FIREWISER is an expert system designed to orchestrate a complex emergency response. It combines a robust client-side architecture, advanced geospatial visualization, and a strategic dual-AI model to deliver unparalleled clarity and control to both citizens in peril and the heroes dedicated to protecting them.
What it does
The FIREWISER Architecture is engineered on a multi-layered, hybrid model that strategically combines the immense analytical power of cloud-based AI with the speed and privacy of on-device AI optimized for Arm. This dual approach ensures real-time performance, deep personalization, and actionable operational intelligence within the high-stakes environment of a critical incident.
At the core of the evacuee experience is the Evacuee Digital Twin, powered by the Google AI Platform's Gemini 2.5 Flash model. Its primary cognitive role is Hyper-Personalized Guidance Generation.
The system employs a sophisticated zero-shot prompting strategy, where Gemini ingests a user's specific demographic profile—such as "Parents with Young Children"—and instantly generates a highly structured JSON object. The true novelty lies in the strict enforcement of a responseSchema, which compels the LLM to function not as a mere text generator, but as a reliable policy engine. This ensures the output, containing both a tailored evacuation checklist and critical psychological coaching messages, is immediately machine-readable, populating the user interface without any complex parsing.
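A minimal sketch of that contract follows; the field names (`checklist`, `coachingMessages`) are illustrative assumptions, not the app's actual schema from aiService.ts. The schema literal mirrors the shape passed to the Gemini API alongside `responseMimeType: "application/json"`, and a defensive parser re-validates the payload before it ever reaches the UI:

```typescript
// Assumed output shape; the real schema lives in aiService.ts.
interface WiseOutput {
  checklist: string[];
  coachingMessages: string[];
}

// Mirrors the responseSchema object supplied to the Gemini API call.
const responseSchema = {
  type: "OBJECT",
  properties: {
    checklist: { type: "ARRAY", items: { type: "STRING" } },
    coachingMessages: { type: "ARRAY", items: { type: "STRING" } },
  },
  required: ["checklist", "coachingMessages"],
} as const;

// Defensive parse: even with schema enforcement, the client re-validates
// before populating the interface, so a malformed payload can never render.
function parseWiseOutput(raw: string): WiseOutput | null {
  try {
    const data = JSON.parse(raw);
    const isStrArr = (v: unknown): v is string[] =>
      Array.isArray(v) && v.every((x) => typeof x === "string");
    if (isStrArr(data.checklist) && isStrArr(data.coachingMessages)) {
      return { checklist: data.checklist, coachingMessages: data.coachingMessages };
    }
    return null;
  } catch {
    return null; // not JSON at all
  }
}
```

Because the parser returns `null` rather than throwing, the calling code can branch to static fallback guidance in a single expression.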
This approach creates actionable, context-aware guidance designed to mitigate panic and improve evacuee compliance under duress. Simultaneously, the system performs Real-Time Spatio-Temporal Hazard Forecasting to model risk and generate safe routes. This component leverages the Google Maps Platform and a simulated Vertex AI backend. It integrates multiple data layers onto a high-tilt satellite map, including live, hyperlocal environmental data fetched from the Google Air Quality API.
The key innovation is the visualization of an "AI Predictive Boundary," a polygon representing the fire's anticipated trajectory. This boundary simulates the output of a powerful Vertex AI-hosted geospatial model, enabling the app's Custom SafeRoute to be plotted not just based on current road closures, but by intelligently navigating around future high-risk zones.
Finally, the Command Center AI Orchestration component handles Multi-Agent Triage & Tactical Asset Optimization through a hybrid AI approach. First, the Dynamic Triage Overlay uses a cloud-based Gemini model to simulate the processing of a high-throughput, anonymized data stream of evacuee locations. The AI performs real-time cluster analysis, distilling complex movement patterns into clear "En Route" vs. "Safe" metrics, providing commanders with essential triage intelligence without exposing any personally identifiable information (PII). Second, the system features a core Air Asset Optimization algorithm (getOptimalDropZone) that analyzes the positions of at-risk evacuees relative to the fire's predicted path. It calculates the geometric centroid of the most vulnerable cluster, identifying the single most effective drop zone for fire retardant to protect the maximum number of lives.
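The centroid step of getOptimalDropZone can be sketched as follows; the signature and the idea of passing in an already-filtered at-risk cluster are simplifications of the real implementation:

```typescript
interface LatLng { lat: number; lng: number; }

// Simplified sketch: given the evacuees already flagged as at-risk relative
// to the fire's predicted path, the optimal retardant drop zone is taken as
// the geometric centroid of that cluster. (Small-area approximation; true
// geodesic centroids would differ over large distances.)
function getOptimalDropZone(atRisk: LatLng[]): LatLng | null {
  if (atRisk.length === 0) return null; // no vulnerable cluster, no drop
  const sum = atRisk.reduce(
    (acc, p) => ({ lat: acc.lat + p.lat, lng: acc.lng + p.lng }),
    { lat: 0, lng: 0 }
  );
  return { lat: sum.lat / atRisk.length, lng: sum.lng / atRisk.length };
}
```

The centroid minimizes the sum of squared distances to the cluster's members, which is why a single drop placed there covers the maximum number of nearby evacuees.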
This is all brought together through Hybrid Command Execution. For the evacuee, the app utilizes Chrome's on-device AI for instant, private summarization and rephrasing of checklist items. For the commander, a Cloud AI simulates a "vetting" process, transforming a natural language command into a standardized, error-checked tactical directive, ensuring absolute clarity in a high-stakes environment.
How FIREWISER Saves Lives
Hyper-Personalized Guidance Generation: It saves lives by using the Gemini AI to generate a structured, tailored evacuation plan (the Evacuee Digital Twin) instantly based on a user's profile (e.g., "Parents with Young Children"), providing critical, context-aware instructions designed to mitigate panic and improve compliance under duress.
Real-Time Hazard Forecasting and Safe Routes: The system models the "AI Predictive Boundary" (the fire's anticipated trajectory) and uses the Google Maps Platform to plot a Custom SafeRoute that intelligently navigates around future high-risk zones, not just current closures, preventing people from driving into danger.
Multi-Agent Triage & Tactical Asset Optimization: For responders, the system uses cluster analysis to create a Dynamic Triage Overlay that pinpoints the most vulnerable, high-risk groups of evacuees. It then uses the getOptimalDropZone algorithm to calculate the single most effective location for fire retardant drops, ensuring maximum protection of human lives by focusing resources where they are needed most.
How we built it
From an architectural standpoint, FIREWISER is engineered as a high-performance, Arm-optimized, client-centric Progressive Web Application (PWA). The fundamental design principle was to push as much logic and rendering to the edge (the user's device) as possible, ensuring maximum resilience and high energy efficiency on Arm mobile processors.
System Architecture & Frontend Philosophy: Client-Centric PWA for Arm-Powered Devices
Core Framework: We selected React with TypeScript not merely for componentization, but to enforce a strict, predictable state machine model for the UI. In a critical application, type safety is non-negotiable; it establishes a durable contract between our frontend components and the diverse data sources we orchestrate (Gemini, Google Platform APIs, on-device AI), mitigating an entire class of runtime errors.

State Management Strategy: The current implementation leverages component-level state (useState within App.tsx) as a deliberate choice for this proof-of-concept's linear user flow. This minimized boilerplate and maximized iteration speed. For a production deployment, we have architected a transition to a more scalable solution such as Zustand or Redux Toolkit. This would be necessary to manage complex, cross-cutting state, such as synchronizing the real-time positions of hundreds of assets on the map with backend updates via WebSockets, without prop-drilling or performance degradation.
Hybrid Rendering Model for High-Performance Visualization: The Map.tsx component is the system's performance centerpiece. We implemented a critical architectural pattern: separating state management from high-frequency rendering. React's reconciliation loop is used only to manage the lifecycle of map elements (e.g., creating or destroying a marker). All high-frequency updates, such as the 60fps animations of fire, evacuee pings, and asset movement, are handled directly by a requestAnimationFrame loop that mutates the underlying google.maps.Marker objects, with rendering accelerated by the mobile device's GPU (typically an Arm Mali or compatible chip). This avoids overwhelming React's virtual DOM, a common bottleneck in data-visualization-heavy applications, and ensures a fluid, responsive tactical overview even under heavy load.
Multi-Tiered AI Orchestration: A Hybrid Strategy for Arm Efficiency
The core innovation of FIREWISER is its strategic, multi-tiered AI architecture. We don't use a monolithic AI; we orchestrate a fleet of specialized models, deploying the right tool for the right job based on requirements for latency, privacy, power, and cost.
Tier 1: Cloud AI (Google Gemini & Platform APIs): This tier is reserved for tasks requiring massive computational power and access to authoritative, global datasets. Our use of the Gemini API in aiService.ts is highly disciplined. We treat it not as a generative chatbot, but as a structured, on-demand data transformation service. By enforcing a rigid responseSchema and setting the responseMimeType to application/json, we compel the LLM to behave as a predictable API endpoint. This is the single most important technique for ensuring reliability and eliminating the risk of malformed or "creative" responses in a life-or-death context.
Tier 2: Arm-Optimized On-Device AI (Chrome's Built-in AI - window.ai): Leveraged for tasks demanding zero-latency and absolute user privacy. The summarization and rephrasing features in Step4a_Checklist.tsx are executed entirely on-device via chromeAiService.ts. This is a strategic choice for Arm architecture optimization: the underlying Web AI frameworks are designed to compile and run their models using the most efficient native instructions provided by the host OS. On mobile devices, this means the execution is transparently accelerated by the Arm processor's dedicated compute units (CPU NEON SIMD, integrated Mali GPU, or dedicated NPU). This dramatically reduces latency and ensures maximum power efficiency, which is essential for preserving the battery life of evacuees’ devices. Crucially, any potentially personal data inferred from their checklist remains on their machine, a critical privacy consideration.
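Because Chrome's built-in AI surface is still experimental and its exact shape has varied across releases, a hedged sketch of the chromeAiService.ts approach hides the detection behind a small interface and falls back to a local heuristic when no on-device model is available. The OnDeviceSummarizer interface and the heuristic are illustrative assumptions, not the real service:

```typescript
// Stand-in for whatever summarizer object feature detection yields from
// Chrome's built-in AI (window.ai / Summarizer) on a given release.
interface OnDeviceSummarizer {
  summarize(text: string): Promise<string>;
}

// Naive fallback when no on-device model is present: keep the first sentence.
function heuristicSummary(text: string): string {
  const firstSentence = text.split(/(?<=[.!?])\s/)[0] ?? text;
  return firstSentence.trim();
}

async function summarizeChecklistItem(
  text: string,
  onDevice: OnDeviceSummarizer | null // null when feature detection fails
): Promise<string> {
  if (onDevice) {
    try {
      // Runs locally, accelerated by the Arm CPU/GPU/NPU; no data leaves the device.
      return await onDevice.summarize(text);
    } catch {
      // Fall through to the heuristic on any on-device failure.
    }
  }
  return heuristicSummary(text);
}
```

Keeping the experimental API behind this seam means the UI code never touches `window.ai` directly, so a Chrome API change only requires updating the detection shim.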
Tier 3: High-Fidelity Client-Side Simulation: The getOptimalDropZone function in aiService.ts is a deliberately architected, high-fidelity simulation of a future backend ML model. This approach was a strategic decision to decouple frontend and backend development. It allowed our UI/UX team to build and refine the complete end-to-end Air Operations command workflow in parallel with the backend team's model development, dramatically accelerating our iteration cycle.
Challenges we ran into
Maintaining 60fps Map Rendering Under Load: Our initial implementation, which naively tied map marker positions to React state, resulted in catastrophic frame drops when displaying more than a few dozen animated evacuee dots. The UI became unusable. Resolution: We engineered a "marker manager" pattern within the Map.tsx component. We now maintain a Map object (commandMarkersRef) that holds direct references to the google.maps.Marker instances. Our animation loop bypasses React entirely, using marker.setPosition() to directly mutate marker properties on the GPU-accelerated map canvas. This architectural shift was the key to achieving a fluid, scalable visualization.
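The marker manager pattern can be sketched with a stand-in MarkerLike interface in place of google.maps.Marker (only setPosition is modeled; in the real component, commandMarkersRef holds actual marker instances keyed by entity id):

```typescript
interface LatLng { lat: number; lng: number; }
interface MarkerLike { setPosition(pos: LatLng): void; }

class MarkerManager {
  // Mirrors commandMarkersRef: direct references, outside React state.
  private markers = new Map<string, MarkerLike>();

  // React's reconciliation only ever calls register/remove (marker lifecycle).
  register(id: string, marker: MarkerLike): void { this.markers.set(id, marker); }
  remove(id: string): void { this.markers.delete(id); }

  // Called from a requestAnimationFrame loop; mutates markers directly and
  // never touches React state, so no VDOM reconciliation happens per frame.
  tick(positions: Map<string, LatLng>): void {
    for (const [id, pos] of positions) this.markers.get(id)?.setPosition(pos);
  }
}
```

In Map.tsx the animation side would look roughly like `requestAnimationFrame(function loop(t) { manager.tick(interpolatePositions(t)); requestAnimationFrame(loop); })`, with `interpolatePositions` a hypothetical helper producing per-frame coordinates.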
Mitigating User Anxiety During Asynchronous Operations: A blank loading screen is unacceptable in an emergency application; it induces panic. The challenge was to transform a necessary delay into a confidence-building experience. Resolution: We designed the Step3_Loading.tsx component as a "choreographed narrative." While Promise.all initiates all data fetches in parallel for maximum efficiency, the UI presents a timed sequence of status updates. This creates a powerful illusion of a deliberate, multi-step process, which gives the user a sense of control and builds trust that a robust system is working on their behalf. This focus on perceived performance is a critical piece of UX engineering for high-stress environments.
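One way to keep that choreography testable is to model it as a pure function from elapsed time to status label, which the loading component can poll each frame while Promise.all resolves the real fetches in parallel. The step labels and durations below are illustrative, not the actual Step3_Loading.tsx sequence:

```typescript
interface LoadingStep { label: string; durationMs: number; }

// Pure choreography: given elapsed time, which status line should show?
// Clamps to the final step so the UI never goes blank if fetches run long.
function statusAt(steps: LoadingStep[], elapsedMs: number): string {
  let cumulative = 0;
  for (const step of steps) {
    cumulative += step.durationMs;
    if (elapsedMs < cumulative) return step.label;
  }
  return steps[steps.length - 1]?.label ?? "";
}
```

Separating the timed narrative from the actual fetch promises means the perceived sequence is deterministic even though network timing is not.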
Hardening a Generative AI for Mission-Critical Use: The inherent non-determinism of LLMs poses a significant risk. An API failure or a malformed JSON response could render the application useless at the worst possible moment. Resolution: We implemented a defense-in-depth strategy. Primary Defense: Strict schema enforcement on our Gemini API calls, as detailed above. Secondary Defense: The catch block within our generateWiseOutput function in aiService.ts is a critical safety net. If the Gemini API call fails for any reason (network error, API key issue, timeout, malformed response), the system immediately and seamlessly falls back to the pre-vetted, static ROLE_BASED_GUIDANCE. This ensures the user always receives a safe, actionable evacuation plan, guaranteeing high availability for the core feature.
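The fallback path can be sketched as follows, with the ROLE_BASED_GUIDANCE contents and the fetcher signature as stand-ins for the real aiService.ts internals:

```typescript
interface Guidance { checklist: string[]; coachingMessages: string[]; }

// Pre-vetted static guidance; the real table covers every supported profile.
const ROLE_BASED_GUIDANCE: Record<string, Guidance> = {
  default: {
    checklist: ["Grab go-bag", "Follow the marked SafeRoute"],
    coachingMessages: ["Stay calm; help is coordinated."],
  },
};

async function generateWiseOutput(
  profile: string,
  fetchFromGemini: (profile: string) => Promise<Guidance>
): Promise<Guidance> {
  try {
    // Primary defense: the schema-enforced Gemini call.
    return await fetchFromGemini(profile);
  } catch {
    // Secondary defense: any failure (network, API key, timeout, malformed
    // response) falls back to static guidance, so the evacuee always gets
    // a safe, actionable plan.
    return ROLE_BASED_GUIDANCE[profile] ?? ROLE_BASED_GUIDANCE.default;
  }
}
```

Injecting the fetcher as a parameter also makes the failure path trivially testable with a rejecting stub.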
Accomplishments that we're proud of
The Real-Time Dynamic Triage Overlay: This is far more than dots on a map; it's the successful engineering of an actionable intelligence layer from raw, anonymized data. The system's ability to process and clearly visualize the flow of evacuees from "en-route" to "safe" provides an unprecedented level of real-time situational awareness, enabling commanders to allocate resources with precision.
The Multi-Tiered AI Orchestration Engine: We are particularly proud of the successful integration of cloud, Arm-accelerated on-device, and simulated AI into a single, coherent application. This hybrid architecture represents the future of intelligent application design, demonstrating how to strategically balance immense power (cloud), instantaneous, efficient response (on-device on Arm hardware), and development agility (simulation).
Achieving a "Cinematic" UX Without Sacrificing Performance: We made a conscious decision that the user interface should be not just functional, but also intuitive and confidence-inspiring. By investing in high-fidelity custom SVG animations, a tilted 3D map perspective, and dynamic, pulsating data layers, we created a "command center" experience. This cinematic quality enhances a commander's ability to rapidly parse complex information. The underlying performance optimizations were the key engineering accomplishment that made this possible.
What we learned
Treat LLMs as Constrainable Services, Not Oracles: Our most significant takeaway was the need to architecturally constrain generative AI. The shift from open-ended prompting to demanding structured JSON output via a schema was a pivotal moment. It taught us that to build reliable systems, we must treat LLMs as incredibly powerful but fuzzy microservices, and it is our responsibility as engineers to define a rigid contract (the schema) that they must adhere to.
Separate State Logic from High-Frequency Rendering Logic: The performance challenges with Map.tsx ingrained this principle in our team. A React re-render should be triggered only by a change in the application's logical state. Purely visual phenomena that update at 60fps, like animations or pulsing effects, must be offloaded to browser-native APIs that leverage the Arm GPU directly (requestAnimationFrame, CSS animations) to prevent the VDOM from becoming a bottleneck.
In High-Stress UX, Perception is Reality: Engineering the loading sequences taught us that how you communicate progress is as important as the progress itself. A well-choreographed UI that provides a clear narrative of the work being done builds immense user trust and reduces anxiety, directly impacting the application's effectiveness.
What's next for FIREWISER
The current application is a highly successful validation of our core architecture. The next phase focuses on scaling this to a production-grade, globally deployable system.
Transition to a Real-Time Backend Infrastructure: We will replace the client-side simulations with a robust backend on Google Cloud. The architecture will involve a high-throughput API Gateway feeding evacuee location data into a Pub/Sub topic. A Dataflow pipeline will consume this stream, performing real-time anonymization and aggregation (e.g., snapping points to S2 cells to protect privacy) before broadcasting updates to the command view via WebSockets. Geospatial-temporal analytics will run in BigQuery.
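As a simplified stand-in for S2-cell snapping (a production pipeline would use the real S2 library), quantizing coordinates to a fixed grid shows how individual positions collapse into privacy-preserving cell counts. The 0.01° cell size (roughly 1 km at this latitude) is illustrative:

```typescript
// Quantize a coordinate to a grid cell key; individual positions are never
// retained downstream, only the cell identifier.
function snapToCell(lat: number, lng: number, cellDeg = 0.01): string {
  const snap = (v: number) => Math.floor(v / cellDeg) * cellDeg;
  return `${snap(lat).toFixed(2)}:${snap(lng).toFixed(2)}`;
}

// Aggregate raw points into per-cell counts, the only artifact that would be
// broadcast to the command view.
function aggregate(points: { lat: number; lng: number }[]): Map<string, number> {
  const counts = new Map<string, number>();
  for (const p of points) {
    const cell = snapToCell(p.lat, p.lng);
    counts.set(cell, (counts.get(cell) ?? 0) + 1);
  }
  return counts;
}
```

The command view then renders density per cell, which is exactly the "En Route" vs. "Safe" triage signal without any PII leaving the pipeline.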
Evolving to Multi-Modal AI Reasoning:
Dynamic, Predictive Routing: We will deprecate the static SAFE_ROUTE_PATH. The next iteration will feature a backend Gemini model that continuously ingests real-time traffic data, weather patterns, and user-submitted hazard reports to calculate and dynamically push updated, optimal evacuation routes to each individual user's device.
Live Video Analysis: We will enable commanders to feed live drone footage into a multi-modal Gemini endpoint. The model will perform real-time object detection to automatically identify and map new fire perimeters, compromised infrastructure, and stranded individuals, adding them as a dynamic intelligence layer to the tactical map.
Implementing a Closed-Loop Communication & Hazard Reporting System: The system will evolve from a one-way information broadcast to a two-way communication network. Commanders will be able to draw a geofence on the map and send targeted push notifications to all users within that zone. Conversely, evacuees will be able to submit reports (e.g., "Road Blocked at Main & 1st") with photos. A multi-modal AI will vet these reports, and upon confirmation, automatically update the map and re-route other users around the new hazard, creating a self-healing, community-powered safety network.
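A commander-drawn geofence reduces to a point-in-polygon test per evacuee. A standard ray-casting check is sketched below; it is adequate for city-scale polygons, though a geodesic library would be needed near the poles or the antimeridian:

```typescript
interface LatLng { lat: number; lng: number; }

// Ray-casting (even-odd) point-in-polygon test: cast a ray from the point
// and count edge crossings; an odd count means the point is inside.
function isInsideGeofence(point: LatLng, fence: LatLng[]): boolean {
  let inside = false;
  for (let i = 0, j = fence.length - 1; i < fence.length; j = i++) {
    const a = fence[i], b = fence[j];
    const crosses =
      (a.lng > point.lng) !== (b.lng > point.lng) &&
      point.lat < ((b.lat - a.lat) * (point.lng - a.lng)) / (b.lng - a.lng) + a.lat;
    if (crosses) inside = !inside;
  }
  return inside;
}
```

Filtering the anonymized evacuee set through this predicate yields the target list for the geofenced push notification.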