Inspiration

In 2021, the town of Lytton, BC burned to the ground in just 20 minutes.

Current wildfire detection relies on satellites (which are expensive and not available everywhere) or cellular IoT (which fails in dead zones). By the time smoke is spotted, the "Golden Hour" for suppression is already lost.

What it does

Embers closes this gap. We built a decentralized Mesh network of Edge AI Sentry Nodes and low-cost Relay Nodes that monitor the Wildland-Urban Interface WITHOUT needing the internet.

Sentry Nodes: Placed strategically near the forest floor in high-risk areas, these nodes use environmental sensors to detect pre-ignition conditions (Heat/Dryness/VOCs) and flag threats immediately.

The Visual Cortex: Computer Vision detects smoke plumes from higher vantage points. Since cameras and CV inference consume far more power than sensors, we activate them only when the environmental readings warrant it.

When a threat is flagged, data is blasted over an Offline Ad-Hoc Mesh Network, hopping from node to node until it reaches a base station gateway, alerting first responders instantly. Embers trades expensive satellite delays for affordable, ground-level truth. And since the flagging happens on site, immediate 'spinal cord' reactions are also programmable.
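The node-to-node alert hop can be pictured as a minimal flood-forwarding loop. This is an illustrative sketch, not our actual firmware; names like `MeshNode` and `seen_ids` are invented for the example:

```python
import time

class MeshNode:
    """Minimal flood-forwarding sketch: each node rebroadcasts an alert
    to its neighbours until the packet reaches a gateway."""
    def __init__(self, node_id, is_gateway=False):
        self.node_id = node_id
        self.is_gateway = is_gateway
        self.neighbours = []   # nodes within radio range
        self.seen_ids = set()  # dedup so floods don't loop forever
        self.delivered = []    # alerts that reached this gateway

    def receive(self, packet):
        if packet["id"] in self.seen_ids:
            return  # already forwarded this alert, drop it
        self.seen_ids.add(packet["id"])
        if self.is_gateway:
            self.delivered.append(packet)  # hand off to first responders
            return
        packet = dict(packet, hops=packet["hops"] + 1)
        for n in self.neighbours:
            n.receive(packet)

# Three-node chain: sentry -> relay -> gateway
sentry = MeshNode("s1")
relay = MeshNode("r1")
gateway = MeshNode("g1", is_gateway=True)
sentry.neighbours = [relay]
relay.neighbours = [sentry, gateway]

alert = {"id": "fire-001", "hops": 0, "temp_c": 61.5, "smoke": True,
         "ts": time.time()}
sentry.receive(alert)
print(gateway.delivered[0]["hops"])  # -> 2 (sentry->relay, relay->gateway)
```

The dedup set is what keeps a broadcast flood from echoing forever in a mesh with cycles.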

How we built it

We split our team into two special forces: Team Edge (Hardware/Network/ML) and Team Cloud (Web/Data).

For the hardware, we used Raspberry Pi 4s as our Sentry Nodes in an Ad-Hoc Mesh. Since we couldn't rely on the cloud for processing, we had to build our own Edge AI brains. We used Google Colab to train a custom MobileNetV2 model on a fire dataset, then quantized it into a tiny .tflite file that runs efficiently on the Pi's Arm processor. We also built a "Digital Nose" algorithm that fuses temperature, humidity, and air quality readings to detect pre-ignition conditions before a fire even starts.
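The flavour of the "Digital Nose" fusion can be sketched as a weighted risk score over the three readings. The thresholds, ramps, and weights below are illustrative placeholders, not our field-calibrated values:

```python
def digital_nose(temp_c, humidity_pct, voc_ppb):
    """Fuse three readings into a single pre-ignition risk score in [0, 1].
    All constants here are illustrative, not calibrated values."""
    # Normalise each reading against a hazardous ceiling, clamped to [0, 1]
    heat = min(max((temp_c - 30) / 25, 0.0), 1.0)        # ramps 30 -> 55 degC
    dryness = min(max((30 - humidity_pct) / 30, 0.0), 1.0)  # ramps 30 -> 0 %RH
    vocs = min(max(voc_ppb / 500, 0.0), 1.0)             # ramps 0 -> 500 ppb

    # Weighted fusion so no single sensor glitch trips the alarm alone
    score = 0.35 * heat + 0.35 * dryness + 0.30 * vocs
    return round(score, 2)

print(digital_nose(temp_c=24, humidity_pct=55, voc_ppb=40))   # benign day -> 0.02
print(digital_nose(temp_c=48, humidity_pct=8, voc_ppb=420))   # pre-ignition -> 0.76
```

Fusing before thresholding is the design choice that lets the node distinguish "hot afternoon" from "hot, bone-dry, and off-gassing".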

For the connectivity, we ditched the router completely. We configured the Pis to talk over a decentralized Ad-Hoc Wi-Fi Mesh, passing JSON packets from node to node until they hit our Gateway (a laptop acting as the bridge). This uploads the telemetry to MongoDB Atlas, which feeds our React dashboard hosted on Railway.
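The JSON packets themselves are simple; a sketch of the shape plus the gateway-side validation step, with field names invented for the example (in the real pipeline the validated record goes to MongoDB Atlas via pymongo rather than being returned):

```python
import json
import time

PACKET_FIELDS = {"node_id", "ts", "temp_c", "humidity_pct", "voc_ppb", "alert"}

def make_packet(node_id, temp_c, humidity_pct, voc_ppb, alert=False):
    """Serialize one telemetry reading into the JSON string a node forwards."""
    return json.dumps({
        "node_id": node_id, "ts": time.time(),
        "temp_c": temp_c, "humidity_pct": humidity_pct,
        "voc_ppb": voc_ppb, "alert": alert,
    })

def gateway_ingest(raw):
    """Gateway side: parse and validate before handing the record to storage."""
    record = json.loads(raw)
    if set(record) != PACKET_FIELDS:
        raise ValueError(f"malformed packet: {sorted(record)}")
    return record

rec = gateway_ingest(make_packet("sentry-01", 41.2, 18.0, 260.0, alert=True))
print(rec["node_id"], rec["alert"])  # -> sentry-01 True
```

Validating at the gateway matters because a multi-hop mesh can deliver truncated or duplicated payloads; garbage should die at the bridge, not in the dashboard.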

Challenges we ran into

The biggest challenge was a mix of "Dependency Hell" and Hardware Scarcity.

On the software side, we started with the newest version of Python (3.13), only to realize that critical libraries like TensorFlow Lite and OpenCV weren't compatible yet. We spent hours fighting NumPy 2.0 conflicts and "Module Not Found" errors, eventually having to nuke our entire environment and rebuild it on Python 3.11 just to get the AI engine breathing.

On the hardware side, we were running on fumes. We didn't have enough Raspberry Pis to build the full-scale mesh we envisioned, and we completely lacked a USB webcam for our Sentry Node. But we believed so strongly in this idea that we refused to pivot to a simple software project.

We got scrappy. We turned an Android phone into a "Sensor Module" to act as our camera. We built "Virtual Input Interfaces" to inject synthetic data into the Pis to prove the logic worked even when we ran out of sensors. We also hit a wall discovering that modern Android blocks Ad-Hoc mesh connections, forcing us to re-architect our entire network topology on the fly. It was a chaotic fight against resources, but the system works.
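The "Virtual Input Interface" pattern is roughly: try the hardware driver, and on failure swap in a synthetic stream with the same shape. A minimal sketch (function names and value ranges are invented for illustration):

```python
import itertools
import random

def real_sensor():
    """Stand-in for the hardware driver; raises when no sensor is attached."""
    raise OSError("sensor not connected")

def virtual_sensor(seed=42):
    """Synthetic replacement: yields plausible (temp_c, humidity_pct) pairs
    so downstream logic can be exercised with no hardware at all."""
    rng = random.Random(seed)  # seeded, so test runs are reproducible
    while True:
        yield (rng.uniform(15, 45), rng.uniform(10, 60))

def read_stream():
    """Try hardware first; fall back to the virtual interface."""
    try:
        real_sensor()
    except OSError:
        return virtual_sensor()

readings = list(itertools.islice(read_stream(), 3))
print(len(readings))  # -> 3 synthetic samples
```

Because the virtual feed has the same interface as the driver, everything downstream (the Digital Nose, the mesh forwarding) runs unmodified.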

Accomplishments that we're proud of

We are incredibly proud that we got True Edge Inference working. It’s one thing to call an OpenAI API; it’s another thing to have a Raspberry Pi identifying a fire in 30 milliseconds while completely offline. We’re also proud of our "Hybrid Detection" logic. We managed to code a system that sleeps in a low-power mode (monitoring sensors) and only wakes up the power-hungry camera when it "smells" smoke. It felt like we were building a real, deployable product rather than just a hackathon toy.
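The Hybrid Detection duty cycle reduces to a small state decision per tick. A sketch under assumed names (the threshold value and `run_cycle` are illustrative, not from our code):

```python
WAKE_THRESHOLD = 0.6  # illustrative risk score that wakes the camera

def run_cycle(risk_score, run_inference):
    """One duty cycle of the hybrid detector: stay in low-power sensor
    mode unless the environment 'smells' like smoke, then spend power
    on the camera + CV model for that cycle only."""
    if risk_score < WAKE_THRESHOLD:
        return {"mode": "sleep", "fire": False}        # sensors only
    return {"mode": "camera", "fire": run_inference()}  # wake camera + model

# Lambda stands in for the quantized .tflite smoke classifier
result = run_cycle(0.82, run_inference=lambda: True)
print(result)  # -> {'mode': 'camera', 'fire': True}
```

The point is that the expensive path (camera plus inference) is gated behind the cheap one (sensor polling), which is what makes battery operation plausible.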

What we learned

We learned that hardware is hard. In software, if code breaks, you press undo. In IoT, if a sensor driver fails or a mesh link drops, you have to physically debug the signal with a multimeter or packet sniffer. We also learned a ton about Quantization—taking a massive neural network and shrinking it down to run on a credit-card-sized computer without losing accuracy. It gave us a huge appreciation for the power of Arm processors at the edge.

Beyond the code, we learned the art of complex system integration. Getting an Android phone (Java) to talk to a Raspberry Pi (Linux) via Bluetooth, which then talks to another Pi via Ad-Hoc Wi-Fi, which finally talks to a Windows Laptop via Ethernet, was a crash course in networking protocols. We realized that the "glue" code that connects these devices is just as critical as the AI model itself.

Finally, we learned that constraints breed creativity. Lacking a physical webcam felt like a disaster at first, but it forced us to engineer a robust "Virtual Input" system to simulate video feeds. This actually made our testing pipeline better and more rigorous than if we had just plugged in a camera and hoped for the best.

What's next for Embers

The prototype proves the concept, but for the real world (like the forests of BC), we need more range. The next step is swapping the Wi-Fi Mesh for LoRaWAN, which would extend our range from 100 meters to 10 kilometers per node. We also want to move from Raspberry Pis to microcontrollers (like the ESP32) for the sensor nodes to get the battery life up from hours to months, allowing for a true "set and forget" solar-powered deployment.
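The hours-to-months claim is back-of-envelope arithmetic on average current draw. All figures below are rough illustrative numbers, not measurements of our hardware:

```python
def battery_days(capacity_mah, active_ma, sleep_ma, duty_cycle):
    """Back-of-envelope battery life: average draw weighted by duty cycle."""
    avg_ma = active_ma * duty_cycle + sleep_ma * (1 - duty_cycle)
    return capacity_mah / avg_ma / 24

# Raspberry Pi class node: ~600 mA, effectively always on
print(round(battery_days(10000, 600, 600, 1.0), 1))    # -> 0.7 days

# ESP32-class node: ~120 mA awake 1% of the time, deep sleep otherwise
print(round(battery_days(10000, 120, 0.05, 0.01), 1))  # -> 333.5 days
```

Deep-sleep current, not active current, dominates the average once the duty cycle is small, which is why the microcontroller swap matters more than a bigger battery.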

Built With
