Inspiration

Every year, wildfires consume millions of hectares — not because we lack the technology to detect them, but because we're too slow to act on what we see.

In the 2023 Alberta fire season, 2,217,460 hectares burned — 589% above the 10-year average. The scale of that destruction is almost impossible to internalize until you look at the detection lags:

| Fire | Burned Area (ha) | Detection Lag (hrs) |
|---|---|---|
| Wentzel Fire | 82,117 | 47.33 |
| HWF109 | 45,061 | 47.33 |
| Richardson Complex | 28,504 | 8.15 |
| MWF025 | 105,251 | 0.70 |

The pattern is stark. The relationship between detection lag and final burned area isn't linear; it's exponential. A wildfire spreading at an average rate of v = 7.8 m/min can grow from 1 hectare to over 1,650 hectares in a single hour:

$$A(t) = A_0 \cdot e^{\,r \cdot t}$$

where r is the fire's radial growth rate and t is time in hours. With a 48-hour detection lag and a 12-hour firefighting response, the model predicts a final burned area approaching 19,000 hectares. With near-real-time response, that number drops below 10,000 hectares.
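The growth model above can be evaluated directly. A minimal sketch, backing out a growth rate r from the 1-hectare-to-1,650-hectares-in-one-hour figure (the rate is implied by that example, not a measured constant):

```python
import math

def burned_area(a0_ha: float, r_per_hr: float, t_hr: float) -> float:
    """Project burned area with A(t) = A0 * exp(r * t)."""
    return a0_ha * math.exp(r_per_hr * t_hr)

# Growth rate implied by "1 ha -> ~1,650 ha in one hour": r = ln(1650).
r = math.log(1650.0)  # ~7.41 per hour

print(round(burned_area(1.0, r, 1.0)))  # 1650
```

The same function, evaluated over a detection-plus-response window, is what makes every added hour of delay visible as hectares.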

The numbers are devastating — but they're also an invitation. If delay is the enemy, then speed is the intervention. We built Watchtower because we believed that the right combination of satellite data and ultra-low-latency AI could compress the gap between seeing a fire and fighting it from hours to milliseconds.

Containment data reinforces this urgency:

$$P(\text{containment}) = \begin{cases} 87.31\% & \text{if } \Delta t < 30\ \text{min} \\ \ll 87.31\% & \text{if } \Delta t > 30\ \text{min} \end{cases}$$

Every minute of delay is a probabilistic loss. We wanted to change that.


What It Does

Watchtower is a real-time wildfire intelligence platform that ingests raw satellite anomaly data and generates fully validated, multi-agency Tactical Action Plans in under 0.5 seconds.

The system analyzes thermal anomalies, wind patterns, terrain elevation, population density, and historical fire spread data, then instantly produces coordinated response plans for firefighters, evacuation teams, and emergency services. What used to take 8–12 hours of human analysis now happens before a firefighter finishes reading the first alert.

The core latency target is:

$$\tau_{\text{total}} = \tau_{\text{ingest}} + \tau_{\text{classify}} + \tau_{\text{model}} + \tau_{\text{generate}} < 500\ \text{ms}$$

Each stage in the pipeline must contribute only a fraction of that budget, which shaped every architectural decision we made.
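One way to make that budget enforceable is to check measured stage latencies against per-stage allocations. A sketch, where the 500 ms total comes from the design above but the per-stage split is an illustrative assumption:

```python
# Hypothetical per-stage budgets (ms): only the 500 ms total is from the
# design; the split across stages is illustrative.
STAGE_BUDGET_MS = {
    "ingest": 100,
    "classify": 50,
    "model": 150,
    "generate": 150,
}
TOTAL_BUDGET_MS = 500

def within_budget(measured_ms: dict) -> bool:
    """True if every stage and the pipeline total stay under budget."""
    per_stage_ok = all(
        measured_ms[stage] <= budget for stage, budget in STAGE_BUDGET_MS.items()
    )
    return per_stage_ok and sum(measured_ms.values()) < TOTAL_BUDGET_MS

print(within_budget({"ingest": 80, "classify": 40, "model": 120, "generate": 140}))  # True
```

A check like this can run in CI so a feature that blows a stage budget fails before it ships.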

Beyond wildfires, the same architecture applies to floods, oil spills, hurricanes, illegal maritime vessels, and critical infrastructure monitoring: anywhere satellite intelligence needs to become ground-level action, instantly.


How We Built It

Watchtower is powered by the Cerebras Wafer-Scale Engine (WSE), which provides the ultra-low latency inference needed to run multi-step reasoning chains at production speed. Standard GPU inference introduces latency that compounds across pipeline stages — the WSE eliminates that bottleneck.

We built a four-stage AI pipeline:

Stage 1 — Ingestion. Structured satellite anomaly data is pulled from NASA LANCE (latency: 30 min–4 hrs) and ESA Copernicus (median latency: 3 hrs), then normalized into a unified schema alongside environmental context: wind vectors, terrain slope, vegetation density, and population grids.
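The unified schema from Stage 1 can be sketched as a simple record; every field name below is an illustrative assumption, not the production schema:

```python
from dataclasses import dataclass

# Hypothetical unified anomaly record produced by Stage 1 normalization.
@dataclass
class Anomaly:
    source: str                       # e.g. "NASA LANCE" or "ESA Copernicus"
    lat: float
    lon: float
    brightness_k: float               # thermal brightness temperature (Kelvin)
    wind_vector: tuple                # (u, v) components in m/s
    slope_deg: float                  # terrain slope at the anomaly location
    vegetation_density: float         # normalized 0..1
    population_per_km2: float

a = Anomaly("NASA LANCE", 58.6, -117.1, 367.0, (4.2, -1.1), 6.5, 0.8, 0.3)
print(a.source)  # NASA LANCE
```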

Stage 2 — Threat Classification. A rapid classifier assigns anomaly type, confidence score c in [0, 1], and severity tier. Only anomalies above the confidence threshold c* = 0.72 advance to full modeling.
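The gating step in Stage 2 is a one-line filter. A sketch, assuming "above the threshold" means strictly greater than c* (the anomaly records here are hypothetical):

```python
C_STAR = 0.72  # Stage 2 confidence threshold

def advance_to_modeling(anomalies: list) -> list:
    """Pass only anomalies whose classifier confidence exceeds c* = 0.72.

    Assumes a strict comparison; an inclusive >= would also be defensible.
    """
    return [a for a in anomalies if a["confidence"] > C_STAR]

batch = [
    {"id": "A1", "confidence": 0.91},
    {"id": "A2", "confidence": 0.55},
    {"id": "A3", "confidence": 0.80},
]
print([a["id"] for a in advance_to_modeling(batch)])  # ['A1', 'A3']
```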

Stage 3 — Risk Modeling. Multi-factor spread modeling computes the projected fire perimeter at time t using wind-adjusted spread rates:

$$R(t, \theta) = v_{\text{base}} \cdot f(\text{slope}) \cdot g(\text{fuel}) \cdot h(w, \theta)$$
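The multiplicative structure of the spread model can be sketched directly. The factor functions below are illustrative stand-ins for f, g, and h, not the production fits:

```python
import math

def slope_factor(slope_deg: float) -> float:
    # f(slope): fire spreads faster uphill; simple exponential stand-in.
    return math.exp(0.05 * slope_deg)

def fuel_factor(fuel_density: float) -> float:
    # g(fuel): denser fuel (0..1) supports faster spread.
    return 0.5 + fuel_density

def wind_factor(wind_speed: float, theta: float, wind_dir: float) -> float:
    # h(w, theta): strongest when the bearing theta is downwind.
    return 1.0 + 0.1 * wind_speed * max(0.0, math.cos(theta - wind_dir))

def spread_rate(v_base: float, slope_deg: float, fuel: float,
                wind_speed: float, theta: float, wind_dir: float) -> float:
    """R(t, theta) = v_base * f(slope) * g(fuel) * h(w, theta)."""
    return (v_base * slope_factor(slope_deg) * fuel_factor(fuel)
            * wind_factor(wind_speed, theta, wind_dir))

# Downwind spread exceeds crosswind spread for the same cell.
downwind = spread_rate(7.8, 10.0, 0.8, 6.0, 0.0, 0.0)
crosswind = spread_rate(7.8, 10.0, 0.8, 6.0, math.pi / 2, 0.0)
print(downwind > crosswind)  # True
```

Evaluating R over a sweep of bearings theta yields the projected perimeter at time t.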

Stage 4 — Plan Generation. The Cerebras-hosted model synthesizes all upstream outputs into a structured Tactical Action Plan: resource deployment orders, evacuation zone designations, inter-agency coordination instructions, and confidence attribution for every recommendation.

The frontend provides a live situational awareness dashboard with map overlays, threat scores, and exportable response plans.


Challenges We Ran Into

The hardest challenge wasn't speed — it was trust.

Generating a response plan in 0.3 seconds means nothing if an incident commander can't act on it with confidence. We had to design the reasoning pipeline so that every recommendation is traceable, confidence-scored, and explainable, not just fast. We built an attribution layer that surfaces the evidence chain behind each recommended action, so a firefighting commander can interrogate the plan rather than just accept it.
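The shape of that attribution layer can be sketched as a record that pairs each action with its confidence and evidence chain; the fields and example strings here are hypothetical, not the production format:

```python
from dataclasses import dataclass

# Illustrative attribution record: every recommended action carries the
# confidence score and the ordered evidence chain that produced it.
@dataclass
class Recommendation:
    action: str
    confidence: float
    evidence: list  # ordered chain of upstream signals (strings here)

rec = Recommendation(
    action="Deploy two air tankers to the northeast perimeter",
    confidence=0.84,
    evidence=[
        "thermal anomaly cluster, classifier confidence 0.91",
        "wind vector shifting northeast at 22 km/h",
        "projected perimeter reaches populated grid cell within 4 h",
    ],
)
print(len(rec.evidence))  # 3
```

Surfacing this record alongside each action is what lets a commander interrogate a plan instead of merely accepting it.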

We also wrestled with the multi-modal nature of the input data. Satellite imagery, thermal metadata, wind forecasts, terrain elevation, and population grids have different formats, update frequencies, and reliability profiles. Building a unified ingestion layer that handles missing or conflicting data gracefully — without stalling the pipeline — required significant iteration.

A concrete example: wind data from NOAA updates every hour, but terrain slope is static, and population density is updated annually. The pipeline needs to handle each source's staleness differently. We defined a data freshness weight for each source:

$$w_d(t) = e^{-\lambda_d \cdot (t - t_{\text{last update}})}$$

where λ_d controls how quickly data from source d goes stale (static sources like terrain take λ_d = 0).
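The freshness weighting can be sketched in a few lines; the decay constants below are illustrative assumptions chosen to mirror the update cadences described above:

```python
import math

# Per-source decay constants (per hour) — illustrative, not calibrated.
LAMBDA = {
    "wind": 0.7,         # hourly NOAA updates go stale quickly
    "terrain": 0.0,      # static: never decays
    "population": 1e-4,  # annual updates decay very slowly
}

def freshness_weight(source: str, hours_since_update: float) -> float:
    """w_d(t) = exp(-lambda_d * (t - t_last_update))."""
    return math.exp(-LAMBDA[source] * hours_since_update)

print(freshness_weight("terrain", 1000.0))  # 1.0 — static data never goes stale
print(freshness_weight("wind", 2.0) < freshness_weight("wind", 1.0))  # True
```

Downstream stages can then discount each source by its weight instead of stalling when one feed lags.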


Accomplishments That We're Proud Of

We're proud that Watchtower demonstrates something concrete: token-per-second speed translates directly into hectares saved and lives protected. This isn't a benchmark — it's an operational argument backed by real fire data.

Achieving a multi-step reasoning chain that ingests real environmental data and outputs a structured, multi-agency action plan was genuinely difficult. We got there.

We're also proud of the generality of the architecture. Watching the same pipeline handle a wildfire in Alberta, an illegal vessel going dark in the Pacific, and a flood vector in Southeast Asia, all with sub-second response, validated that we built something extensible, not just a demo.


What We Learned

Speed without structure is noise.

Early versions of the pipeline were fast but produced plans that were difficult to act on. We learned that output format (how a plan is structured, sequenced, and attributed) matters as much as the content itself when the end user is an incident commander making decisions under pressure.

We also learned that latency is a design constraint, not a performance metric. It has to be embedded in the architecture from the start. Every decision, from how input data is chunked to how multi-agency coordination logic is structured, must be evaluated through the lens of:

$$\Delta\tau_{\text{feature}} \leq \Delta\text{Value}_{\text{feature}}$$

If a feature adds time but doesn't add proportional decision quality, it doesn't belong in the critical path.

Finally, we deepened our appreciation for the Cerebras architecture as a genuinely different class of inference hardware: not just faster GPUs, but a fundamentally different trade-off between memory bandwidth, parallelism, and latency that makes real-time multi-step reasoning viable in ways it wasn't before.


What's Next for Watchtower

The immediate next step is live satellite feed integration, moving from structured anomaly descriptions to direct ingestion of raw thermal imagery from NASA LANCE and ESA Copernicus pipelines, processed on-device before transmission.

From there, we want to pursue deployment partnerships with provincial and federal emergency management agencies, starting with high-risk fire jurisdictions in Western Canada and the Western United States.

Longer term, Watchtower becomes a planetary risk monitoring platform: a persistent, always-on intelligence layer over Earth's surface that converts any detected anomaly, anywhere, into an actionable response plan before the next satellite pass completes.

The infrastructure for that future exists today. We built the first version of it.
