Inspiration

Canada has recently faced some of the most devastating wildfire seasons in its history, from the massive forest wildfires in Quebec to the recurrent infernos in British Columbia and Alberta. These crises have pushed our first responders to the absolute brink, forcing them into incredibly hazardous, zero-visibility environments, both deep in the Canadian wilderness and in residential structures threatened by the blaze. Disorientation in these smoke-filled environments is a leading cause of line-of-duty casualties, and incident commanders outside often lack real-time visibility into the physical and mental stress of their teams.

Tackling the HackCanada theme of solving critical issues faced by Canadians, we were inspired to build Ember: a system that gives Canadian responders the ability to see through the smoke and provides commanders with the mission-critical telemetry they need to bring everyone home safely.

What it does

Ember is an AR-powered Heads-Up Display (HUD) and command center dashboard designed for structural firefighting and hazardous search-and-rescue operations:

  • For the Responder (FireSight UI): It provides an augmented reality overlay that acts as a spatial guide in zero-visibility smoke. It also uses edge-based machine learning to draw bounding boxes around objects in their immediate field of view.

  • For the Commander: A real-time WebRTC-powered dashboard. It streams the responder's live POV camera feed, overlays their ML detections, and tracks their telemetry: modeled biometrics (heart rate, blood oxygen, and skin temperature) relayed from a smartwatch companion, so the commander knows exactly what the responder is enduring.

How we built it

We architected Ember as a full-stack, AI-driven spatial application built to withstand the chaotic nature of fire response:

- Frontend: Built with Next.js, leveraging WebXR to render the immersive AR breadcrumbs and HUD interface. We also built a custom robust fallback mode utilizing device sensors for situations where native AR tracking is degraded.

- Backend: A high-performance FastAPI (Python) server handles the complex spatial mathematics (haversine distances, bearing/heading calculations; sketched just after this list) and coordinates the telemetry state.

- Biometrics & Health: We developed a WearOS application in Kotlin that monitors real-time physiological data (heart rate, blood oxygen, and skin temperature) and streams it to the command dashboard, ensuring commanders know exactly when a responder is physically maxed out (see the telemetry sketch below).

- Hardware/Vision AI: Real-time detections run on the edge with TensorFlow.js in the responder's browser, identifying critical objects (exit doors, fallen cabinets, medical equipment) in their field of view; we also prototyped a Vultr-hosted model for enhanced hazard detection.
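
To make the navigation concrete, here is a minimal sketch of the great-circle math involved. Our production implementation lives in the Python backend; the TypeScript below is illustrative and the names are our own shorthand, not a library API:

```typescript
// Illustrative sketch of the spatial math; production code is Python/FastAPI.

const EARTH_RADIUS_M = 6_371_000;
const toRad = (deg: number) => (deg * Math.PI) / 180;
const toDeg = (rad: number) => (rad * 180) / Math.PI;

// Haversine great-circle distance between two lat/lon points, in metres.
function haversineDistance(lat1: number, lon1: number, lat2: number, lon2: number): number {
  const dLat = toRad(lat2 - lat1);
  const dLon = toRad(lon2 - lon1);
  const a =
    Math.sin(dLat / 2) ** 2 +
    Math.cos(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.sin(dLon / 2) ** 2;
  return 2 * EARTH_RADIUS_M * Math.asin(Math.sqrt(a));
}

// Initial bearing from point 1 toward point 2, in degrees clockwise from north.
function initialBearing(lat1: number, lon1: number, lat2: number, lon2: number): number {
  const dLon = toRad(lon2 - lon1);
  const y = Math.sin(dLon) * Math.cos(toRad(lat2));
  const x =
    Math.cos(toRad(lat1)) * Math.sin(toRad(lat2)) -
    Math.sin(toRad(lat1)) * Math.cos(toRad(lat2)) * Math.cos(dLon);
  return (toDeg(Math.atan2(y, x)) + 360) % 360;
}
```

The return arrow the responder sees is simply this bearing minus the device's current heading, normalized to the shortest turn.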
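Similarly, a hedged sketch of the telemetry shape the dashboard consumes; the field names, thresholds, and WebSocket endpoint below are assumptions for illustration, not our actual wire format:

```typescript
// Assumed telemetry payload; real field names and transport may differ.
interface VitalsUpdate {
  responderId: string;
  heartRateBpm: number;  // heart rate from the watch sensor
  spo2Percent: number;   // blood oxygen saturation
  skinTempC: number;     // skin temperature
  timestamp: number;     // epoch milliseconds
}

// Hypothetical dashboard-side listener flagging a responder under duress.
const socket = new WebSocket("wss://command.example/telemetry"); // placeholder
socket.onmessage = (msg: MessageEvent<string>) => {
  const vitals: VitalsUpdate = JSON.parse(msg.data);
  if (vitals.heartRateBpm > 180 || vitals.spo2Percent < 90) {
    console.warn(`Responder ${vitals.responderId} may be physically maxed out`);
  }
};
```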

Challenges we ran into

One of our biggest hurdles was keeping the AR navigation reliable. Native WebXR tracking degrades quickly in featureless, smoke-filled, or erratic environments, so we pivoted and engineered our own fallback mode using raw device sensors and spatial math to keep our navigational arrows accurate relative to the user's heading. Additionally, managing peer-to-peer WebRTC connections within a Next.js environment, while simultaneously running TensorFlow.js object detection on the same video stream, required careful state management and performance optimization to prevent the UI from stalling.
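
A rough sketch of that sensor fallback, assuming standard device-orientation events (simplified; production code also smooths noisy readings and prefers absolute-orientation events where the browser provides them):

```typescript
// Compass heading derived from raw device sensors, used when WebXR tracking
// is degraded. Event handling simplified for illustration.

let deviceHeading = 0; // degrees clockwise from north

window.addEventListener("deviceorientation", (event: DeviceOrientationEvent) => {
  if (event.alpha !== null) {
    // alpha increases counter-clockwise, so flip it into a compass heading.
    deviceHeading = (360 - event.alpha) % 360;
  }
});

// Rotate the on-screen arrow by the difference between the bearing to the
// target (from the backend's bearing math) and where the device is facing,
// normalized into [-180, 180) so the arrow always shows the shortest turn.
function arrowRotation(bearingToTarget: number): number {
  return ((bearingToTarget - deviceHeading + 540) % 360) - 180;
}
```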

Accomplishments that we're proud of

We are incredibly proud of successfully running ML object detection strictly on the edge (the responder's browser) with near-zero latency, whilst simultaneously exchanging signaling data and streaming that exact video feed over WebRTC to the commander.
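
In rough strokes, the pattern looks like this (the coco-ssd model and overlay helper are illustrative stand-ins for our detector, and WebRTC signaling is omitted):

```typescript
// Illustrative sketch: one camera MediaStream feeds both the in-browser
// detector and the WebRTC uplink, so the camera is captured exactly once.
import "@tensorflow/tfjs";
import * as cocoSsd from "@tensorflow-models/coco-ssd";

// Hypothetical overlay hook; the real UI paints boxes onto a <canvas>.
function drawBoundingBoxes(detections: cocoSsd.DetectedObject[]): void {
  for (const d of detections) console.debug(d.class, d.score, d.bbox);
}

async function startResponderFeed(video: HTMLVideoElement, pc: RTCPeerConnection) {
  const stream = await navigator.mediaDevices.getUserMedia({ video: true });
  video.srcObject = stream;
  await video.play();

  // The same tracks are sent to the commander over the peer connection...
  stream.getTracks().forEach((track) => pc.addTrack(track, stream));

  // ...while detection runs locally against the video element. Scheduling
  // each pass with requestAnimationFrame (and never overlapping detect calls)
  // keeps the render loop from stalling.
  const model = await cocoSsd.load();
  const tick = async () => {
    drawBoundingBoxes(await model.detect(video));
    requestAnimationFrame(tick);
  };
  requestAnimationFrame(tick);
}
```

Sharing one MediaStream between the detector and the peer connection means the camera is captured once; only the detection cadence needs throttling.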

What we learned

Building a system meant to operate in the chaotic epicenter of a Canadian wildfire taught us that bleeding-edge technology must be matched with bulletproof reliability. We learned that relying on standard AR frameworks often isn't enough for extreme conditions, forcing us to dive deep into the underlying math of device sensors to build our own robust navigational fallbacks. We also gained extensive experience in WebRTC peer-to-peer streaming and managing complex state between a high-frequency Python math backend and a 3D-rendered React frontend.

What's next for Ember

While our immediate focus has been structural fires and woodland infernos, we envision Ember scaling to protect Canadians across multiple critical sectors:

- API Integrations: We plan to integrate the standalone prototypes we built during the hackathon into the core application loop, specifically merging our Presage Technologies Physiology API for real, live biometrics, Vultr-hosted vision models for enhanced hazard detection, and Gemini/ElevenLabs for synthesized voice intelligence.

- Expanded Canadian Hazard Detection: Training our edge vision models to identify specific environmental hazards unique to Canadian topography, such as unstable permafrost edges or weakened timber in forestry infrastructure.

- Winter Search & Rescue Adaptation: The spatial return arrows we built are universally applicable. We plan to adapt the system for extreme-cold Search and Rescue operations in Northern Canada to guide lost individuals or rescuers through blinding whiteout conditions.

Built With

fastapi, kotlin, next.js, python, tensorflow.js, vultr, wearos, webrtc, webxr