Inspiration
Last year, wildfires caused €225 billion in direct damage worldwide. Wildfires are becoming increasingly common, yet in 2025 the EU cut its wildfire response budget by nearly 30 percent, from €1.7 billion to €1.2 billion!
Meanwhile, the leading cause of wildfire ignition is dry lightning: strikes that hit dry terrain before any rain arrives.
Tens of lives were lost last year as a result, with firefighters often taking over four hours to arrive on scene.
So we asked ourselves: what if we could stop the dry lightning before it even starts?
Take a look at [https://zerostrike.live](https://zerostrike.live) to see more!
What it does
Zerostrike is a proactive wildfire remediation platform powered by autonomous drones controlled by agents. We call ourselves "the Palantir of wildfires". We've trained machine learning models to accurately predict where dry lightning will strike, and built the infrastructure and hardware to autonomously dispatch drones to the at-risk area and release a cloud seeding payload. This increases the likelihood of rainfall, preventing disasters before they happen.
We pull data from tens of different sources: NASA satellites, ESA geospatial datasets, a decade of fire history data, and moisture readings, all combined into a single data layer updated every hour. Our system also tracks live weather data on the position, radius, speed, and direction of storm cells. On top of that data we built a prediction model that calculates a land risk, a strike probability, and a consequence score to identify the areas most prone to dry lightning strikes.

Then we connect it to a Claude-powered agent orchestration layer. The Claude agents have access to the entire platform: the map data, live incoming data, the drone feed, and the satellite imagery, and on that basis they make autonomous decisions. The moment a storm cell's trajectory intersects a red zone, our system flags it, calculates which drone in our fleet can intercept fastest, and generates a dispatch route in under three seconds.

And we didn't just build software. We built the hardware to prove our idea works too. We feed a series of waypoints to the drone, which responds instantly and flies to where the storm will be over a high-risk area. The servo we mounted on the drone then releases cloud seeding chemicals, encouraging clouds to form and triggering rainfall over the high-risk area.
How we built it
Prediction Engine: Eniola trained an XGBoost model on 28,000 real fire ignitions from the 2020 California Lightning Complex, correlated with ERA5 atmospheric reanalysis data. The model predicts ignition probability from atmospheric conditions (CAPE, dewpoint depression, cloud base height, relative humidity), terrain slope, and vegetation dryness (NDVI). It validated at 78% precision on held-out data, and when retrained with real ERA5 weather features, the fire-weather signal nearly doubled in importance, confirming it learns real physics.
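Several of the model's inputs are derived quantities rather than raw readings. A minimal sketch of assembling one feature row is below; the feature names follow the writeup, dewpoint depression is the standard definition (temperature minus dewpoint), and the `vegetation_dryness` transform is an illustrative stand-in, not the trained pipeline:

```python
def ignition_features(temp_c, dewpoint_c, cape_j_kg, cloud_base_m,
                      rel_humidity_pct, slope_deg, ndvi):
    """Assemble one feature row for an ignition-probability model.

    Dewpoint depression (temperature minus dewpoint) is a common proxy
    for how dry the sub-cloud layer is: large values mean storm rain is
    likely to evaporate before reaching the ground, i.e. conditions for
    dry lightning.
    """
    return {
        "cape": cape_j_kg,                       # convective energy (J/kg)
        "dewpoint_depression": temp_c - dewpoint_c,
        "cloud_base_height": cloud_base_m,       # meters above ground
        "relative_humidity": rel_humidity_pct,   # percent
        "terrain_slope": slope_deg,              # degrees
        "vegetation_dryness": 1.0 - ndvi,        # low NDVI -> drier fuel (illustrative)
    }
```

Rows like this, built from ERA5 atmospheric fields plus terrain and NDVI layers, are what the XGBoost classifier is trained on.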
Platform & Agents: Emmanuel and Julian built a three-layer engine: a fuel risk scorer (vegetation dryness + terrain slope + fuel type), an atmospheric scorer (CAPE, dewpoint depression, cloud base height, humidity, precipitation efficiency), and a consequence scorer (population proximity + infrastructure density). These combine into a composite severity grid at 0.05-degree resolution (~5.5 km cells). Storm cells are projected forward over a 6-hour horizon using haversine kinematics, and collision detection flags where projected storms intersect high-severity terrain. The whole pipeline outputs prioritized GeoJSON threat zones served via a Flask REST API on Firebase Cloud Functions.
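The projection-plus-collision step can be sketched as follows. The great-circle forward projection and the 0.05-degree cell snapping match the description above; the severity threshold, dict-based grid, and field names are illustrative assumptions:

```python
import math

EARTH_RADIUS_KM = 6371.0

def project_storm(lat, lon, bearing_deg, speed_kmh, hours):
    """Project a storm cell forward along a great circle (haversine kinematics)."""
    d = speed_kmh * hours / EARTH_RADIUS_KM  # angular distance, radians
    lat1, lon1, brg = math.radians(lat), math.radians(lon), math.radians(bearing_deg)
    lat2 = math.asin(math.sin(lat1) * math.cos(d) +
                     math.cos(lat1) * math.sin(d) * math.cos(brg))
    lon2 = lon1 + math.atan2(math.sin(brg) * math.sin(d) * math.cos(lat1),
                             math.cos(d) - math.sin(lat1) * math.sin(lat2))
    return math.degrees(lat2), math.degrees(lon2)

def cell_index(lat, lon, res=0.05):
    """Snap a coordinate to its 0.05-degree severity-grid cell."""
    return (math.floor(lat / res), math.floor(lon / res))

def collisions(storm, severity_grid, threshold=0.7, horizon_h=6):
    """Flag hours where the projected storm track crosses a high-severity cell."""
    hits = []
    for h in range(1, horizon_h + 1):
        lat, lon = project_storm(storm["lat"], storm["lon"],
                                 storm["bearing"], storm["speed_kmh"], h)
        cell = cell_index(lat, lon)
        if severity_grid.get(cell, 0.0) >= threshold:
            hits.append({"hour": h, "cell": cell})
    return hits
```

Each hit would then be serialized into a GeoJSON threat zone and served through the Flask API.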
Agentic Layer: Claude powers the autonomous decision loop. The agent has full access to the live threat map, storm trajectories, drone fleet telemetry, and satellite imagery. When a collision is detected, it autonomously selects the optimal drone, calculates an intercept route, and dispatches it, replacing the multi-hour human decision bottleneck.
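The fleet-selection step the agent performs can be sketched as a shortest-intercept-time search; the drone record fields and speeds here are illustrative assumptions, not our telemetry schema:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    r = 6371.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def pick_drone(fleet, target_lat, target_lon):
    """Choose the available drone with the shortest ETA to the intercept point."""
    best_id, best_eta_min = None, float("inf")
    for drone in fleet:
        if not drone.get("available", True):
            continue
        dist = haversine_km(drone["lat"], drone["lon"], target_lat, target_lon)
        eta_min = dist / drone["speed_kmh"] * 60.0
        if eta_min < best_eta_min:
            best_id, best_eta_min = drone["id"], eta_min
    return best_id, best_eta_min
```

In the real loop the agent also weighs payload state and battery before committing to a dispatch.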
Frontend: A mission-control-style dashboard built with React, Vite, Tailwind, and Mapbox GL. Multi-layer tactical map with land risk zones, storm cells, trajectory projections, collision overlays, and drone positions. Real-time polling with silent fallback to deterministic mock data for demo reliability.
Hardware: Aditya built the drone integration layer. Since DJI exposes no direct API, we built a custom Android app on DJI MSDK v5 that intercepts the phone-to-remote control link, injecting programmatic waypoint missions. An ESP32-controlled servo mechanism mounted on a DJI Mini 4 Pro releases cloud seeding payload at calculated release points. Firestore acts as the real-time message bus bridging the dashboard, the Android app, and the ESP32 across separate networks.
Challenges we ran into
Hijacking the DJI Control Link: The DJI drone doesn't expose a direct API: all commands flow through the physical remote controller paired to a phone. We had to build an Android app on top of DJI's MSDK v5 that intercepts this phone-to-remote link, injecting programmatic waypoint missions and virtual stick commands while maintaining the SDK's internal state machine. This effectively turns a consumer drone into an autonomous platform.
Cross-Network Command Bridge: The drone controller (Android app on the pilot's phone), the web dashboard, and the ESP32 payload dropper all run on separate networks. We used Firestore as a real-time message bus: the dashboard writes mission commands, the Android app listens and dispatches them to the flight controller, and telemetry flows back. The ESP32 polls a Cloud Function for drop commands, with the 1s poll interval doubling as a cold-start keep-warm.
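The ESP32 side of that bridge is a simple poll-and-actuate loop. The real firmware is embedded code; the sketch below expresses the same logic in Python with a stubbed fetch, where the `{"drop": ...}` payload shape and endpoint behavior are assumptions for illustration:

```python
import time

def poll_loop(fetch, actuate, interval_s=1.0, max_polls=None):
    """Poll a Cloud Function for a drop command; the ~1 s cadence doubles
    as a keep-warm ping against cold starts.

    `fetch` performs the HTTP poll and returns a dict like {"drop": bool}
    (hypothetical payload shape); `actuate` fires the servo release.
    Returns True once the payload has been released.
    """
    polls = 0
    while max_polls is None or polls < max_polls:
        cmd = fetch()                 # e.g. HTTP GET to the drop-command endpoint
        if cmd.get("drop"):
            actuate()                 # trigger the servo release mechanism
            return True
        polls += 1
        time.sleep(interval_s)
    return False                      # gave up without a drop command
```

On the dashboard side, writing the mission document to Firestore is what eventually flips that flag.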
Accomplishments that we're proud of
Getting the whole platform up and functional, with the drone system completely integrated with the rest of the stack. Developing custom hardware (the dropping mechanism) and hooking it up to the Firestore message bus while keeping it as lightweight as possible. We're proud to have built everything from ML models to a beautiful UI, an insightful platform, a mobile app, and custom hardware that acts on the data to deliver a solution.
What we learned
Building across four completely different domains in 30 hours taught us a lot. On the ML side, we learned that feature engineering with real atmospheric data matters far more than model complexity: when we swapped synthetic weather features for real ERA5 reanalysis data, model performance jumped significantly. On the hardware side, we learned that consumer drones like the DJI Mini 4 Pro aren't designed for programmatic control, and that bridging the gap between a locked-down SDK and autonomous flight requires creative workarounds involving Android MSDK interception and Firestore as a cross-network command bus. We also gained a deep appreciation for how much domain knowledge matters: understanding the physics of dry lightning, charge separation, and cloud seeding made the difference between building something that sounds cool and building something that's actually defensible. Finally, we learned that the individual technologies for wildfire prevention all exist already. The missing piece is the orchestration layer that connects prediction to action, and that's what we built.
What's next for Zerostrike
The immediate next step is integrating real-time data feeds: replacing our synthetic data provider with live NOAA HRRR weather models, NASA FIRMS fire detection, and Copernicus Sentinel NDVI. The provider pattern in our engine is already built for this swap. Beyond that, we want to connect with XPRIZE Wildfire finalist teams who are building the autonomous suppression hardware (drone swarms, retrofitted helicopters) but lack an intelligent dispatch layer. Zerostrike is the brain that coordinates those arms and legs. Longer term, we're exploring integration with emerging technologies like the EU Laser Lightning Rod project (University of Geneva, Nature Photonics 2023) and corona discharge arrays as next-generation lightning suppression mechanisms that our orchestration backbone could coordinate.