Inspiration
Wildland firefighter fatalities have increased fivefold over three decades, from 2% to 10% of total firefighter deaths. The fires are more frequent and more deadly. Burn-related fatalities jumped from 9% to 27%, and 60% of entrapment fatalities occur on just 3% of fire weather days: days when sudden wind shifts and extreme fire behavior catch crews off guard.
The primary defense against wildfire isn't water. It's fireline construction — crews of hotshot firefighters clearing vegetation down to mineral soil to starve the fire of fuel. They work in extreme heat, with limited visibility, making split-second decisions about what to clear and when to evacuate. They have no real-time intelligence on fire behavior, no physiological monitoring, and no AI assistance telling them which fuel sources to prioritize.
We asked: what if a firefighter could see the fire's future? What if their helmet could tell them which brush to clear first, warn them when their body is overheating before they feel it, and calculate an escape route when the wind shifts?
What it does
ForeSight is an immersive AR heads-up display for wildland firefighters that combines real-time biometric monitoring, AI-powered fuel classification, predictive fire spread modeling, and intelligent evacuation routing — all delivered through a Meta Quest 3 mounted inside a firefighter's helmet.
Heat Stress Shield — A wearable sensor armband continuously monitors skin temperature, sweat level, and hydration. A helmet-mounted sensor unit tracks ambient temperature, humidity, and acceleration for fall detection. All signals fuse into a Wet Bulb Globe Temperature (WBGT) estimate — the same metric OSHA uses — and escalate through four tiers (green → yellow → orange → red) with spoken voice alerts through the headset.
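The four-tier escalation can be sketched as a simple threshold map on the WBGT estimate. The 26/29/32 °C cutoffs below are illustrative assumptions for the sketch, not the values ForeSight ships with or any OSHA-mandated limits:

```python
def heat_tier(wbgt_c: float) -> str:
    """Map a WBGT estimate (deg C) to a four-tier alert level.
    Thresholds are placeholder assumptions, not ForeSight's real config."""
    if wbgt_c < 26.0:
        return "green"
    if wbgt_c < 29.0:
        return "yellow"
    if wbgt_c < 32.0:
        return "orange"
    return "red"
```

In the real system the tier feeds both the HUD color and the spoken alert priority.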
AI Fuel Classification — The flagship feature. When a firefighter clenches their fist (detected by a Mindrove EMG armband) or pulls the Quest 3 controller trigger, Gemini Vision analyzes the scene and identifies every visible fuel source — dead grass, pine needle litter, fallen branches, chaparral, living brush, trees. Each fuel source is classified by flammability using NWCG fuel model standards and placed as a 3D marker locked in world space. Red markers pulse on the highest-priority targets. The system speaks: "Dead brush, 3 meters at your 2 o'clock. Scrape to mineral soil."
Wildfire Spread Prediction — A Rothermel cellular automata model — the same physics the US Forest Service uses — simulates fire spread across a 64×64 terrain grid overlaid on real satellite imagery. Wind direction, terrain slope, and fuel type all affect spread rate. A time scrubber lets firefighters preview where the fire will be in 10, 20, or 30 minutes. Teammate positions are displayed as green dots on the map.
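The grid update can be sketched as a cellular automaton whose ignition probability is modulated by wind alignment and elevation difference. This is a toy, Rothermel-flavored sketch: the base probability, wind/slope weights, and fuel rates are invented for illustration and are not the coefficients of the actual Rothermel model or ForeSight's implementation:

```python
import math
import random

# Cell states for the toy fire-spread cellular automaton
UNBURNED, BURNING, BURNT = 0, 1, 2

def step(grid, fuel_rate, wind=(1.0, 0.0), elev=None, base_p=0.25, rng=random):
    """Advance the fire one tick on an n x n grid.

    wind is a unit (dy, dx) vector; spread is boosted downwind.
    elev, if given, is an elevation grid; spread is boosted uphill.
    All weights here are illustrative assumptions.
    """
    n = len(grid)
    nxt = [row[:] for row in grid]
    for y in range(n):
        for x in range(n):
            if grid[y][x] != BURNING:
                continue
            nxt[y][x] = BURNT  # a burning cell is consumed after one tick
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    ny, nx_ = y + dy, x + dx
                    if (dy or dx) and 0 <= ny < n and 0 <= nx_ < n \
                            and grid[ny][nx_] == UNBURNED:
                        # Downwind neighbors get a higher ignition probability
                        align = (dy * wind[0] + dx * wind[1]) / math.hypot(dy, dx)
                        p = base_p * fuel_rate[ny][nx_] * (1.0 + 0.5 * align)
                        if elev is not None:
                            p *= 1.0 + 0.3 * (elev[ny][nx_] - elev[y][x])
                        if rng.random() < min(max(p, 0.0), 1.0):
                            nxt[ny][nx_] = BURNING
    return nxt
```

Running `step` repeatedly and recording each tick gives the frames the time scrubber previews.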
Intelligent Evacuation Routing — When fire approaches on the simulation map, Dijkstra pathfinding computes the safest exit route accounting for predicted fire positions 10 minutes ahead. A green dashed line appears on the map with a compass widget pointing toward safety. Five-tier proximity alerts escalate from warning to automatic MAYDAY.
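The routing step can be sketched as plain Dijkstra over a grid of per-cell traversal costs, where cells predicted to burn within the lookahead window get infinite cost. This is a minimal sketch of the idea, not ForeSight's exact implementation:

```python
import heapq

def evacuation_route(cost, start, goal):
    """Dijkstra over a grid of per-cell traversal costs.

    Cells predicted to burn are given float('inf') cost and never entered.
    Returns the cheapest path as a list of (y, x) cells, or None.
    """
    n, m = len(cost), len(cost[0])
    INF = float("inf")
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, cell = heapq.heappop(pq)
        if cell == goal:
            break
        if d > dist.get(cell, INF):
            continue  # stale queue entry
        y, x = cell
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < n and 0 <= nx < m and cost[ny][nx] < INF:
                nd = d + cost[ny][nx]
                if nd < dist.get((ny, nx), INF):
                    dist[(ny, nx)] = nd
                    prev[(ny, nx)] = cell
                    heapq.heappush(pq, (nd, (ny, nx)))
    if goal not in dist:
        return None
    path, cell = [goal], goal
    while cell != start:
        cell = prev[cell]
        path.append(cell)
    return path[::-1]
```

Rebuilding the cost grid from the simulation's 10-minute-ahead frame and re-running this search keeps the green dashed line current as the fire moves.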
Contextual AI Assistant — Qualcomm Cloud AI running Llama-3.3-70B on Cirrascale's AI 100 Ultra accelerators. Every query includes the firefighter's live biometric data, heat stress tier, and fire proximity in the system prompt. The firefighter asks "Should I pull back?" and gets a recommendation synthesizing all available data. DeepSeek-R1 handles trend analysis over rolling sensor history.
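Folding live context into the system prompt might look like the sketch below. The field names and wording are our illustrative assumptions, not the exact prompt ForeSight sends to Qualcomm Cloud AI:

```python
def build_system_prompt(vitals, tier, fire_distance_m):
    """Assemble a context-rich system prompt from live sensor state.
    Field names and phrasing are hypothetical."""
    return (
        "You are a wildland-fire safety assistant. Current firefighter state:\n"
        f"- skin temperature: {vitals['skin_temp_c']:.1f} C\n"
        f"- WBGT estimate: {vitals['wbgt_c']:.1f} C\n"
        f"- heat stress tier: {tier}\n"
        f"- nearest predicted fire front: {fire_distance_m:.0f} m\n"
        "Answer in one or two sentences a firefighter can act on immediately."
    )
```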
EMG Gesture Control — A Mindrove 4-channel EMG armband classifies muscle gestures using a hyperdimensional computing model. Hard clench triggers a fuel scan. Half clench activates MAYDAY, transmitting biometric data and GPS to incident command. Completely hands-free — works through firefighting gloves.
How we built it
The system runs across four connected devices on a single WiFi hotspot:
Two Arduino UNO Q boards (Qualcomm Dragonwing QRB2210 + STM32U585) — one in the helmet reading ambient temperature, humidity (DHT11), and acceleration (Modulino Movement via Qwiic I2C); one on the armband reading skin temperature (NTC thermistor with voltage divider) and sweat level (water level sensor). Both run Arduino sketches on the STM32 side that read sensors via Bridge RPC and Python scripts on the Qualcomm Linux side that POST JSON data over WiFi to the laptop every 500ms.
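The Linux-side uplink loop can be sketched as below. The endpoint path, laptop address, and field names are illustrative assumptions; on the real board the readings come from `Bridge.call()` into the STM32 sketch rather than hardcoded values:

```python
import json
import ssl
import time
import urllib.request

LAPTOP_URL = "https://192.168.1.10:8443/api/sensors"  # assumed address/path

def build_payload(board_id, readings):
    """Package one sensor sweep as the JSON body POSTed to the laptop."""
    return {"board": board_id, "ts": time.time(), **readings}

def post_reading(payload, url=LAPTOP_URL):
    # The laptop serves a self-signed cert, so skip verification here
    ctx = ssl._create_unverified_context()
    req = urllib.request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    return urllib.request.urlopen(req, timeout=2, context=ctx)

if __name__ == "__main__":
    while True:
        # Real values would come from Bridge RPC into the STM32 side
        payload = build_payload("helmet", {"ambient_c": 31.4, "humidity_pct": 22.0})
        try:
            post_reading(payload)
        except OSError:
            pass  # drop this sample; the next one arrives in 500 ms
        time.sleep(0.5)
```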
A laptop running Python — the central brain. A Flask server on port 8443 (HTTPS with self-signed cert for WebXR) receives sensor data from both UNO Q boards, runs the state machine for sensor fusion and heat stress scoring, hosts the Gemini Vision fuel classification endpoint, and serves the entire WebXR HUD as static files. The state machine computes WBGT using the Stull (2011) wet bulb approximation and Liljegren equation, debounces tier transitions across 3 consecutive readings to prevent false alarms, and tracks cumulative thermal exposure.
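The two pieces doing the heavy lifting here, the Stull wet-bulb approximation and the 3-reading tier debounce, can be sketched as follows (the Stull formula is the published one; the debouncer is a minimal sketch of the behavior described, not our exact code):

```python
import math

def stull_wet_bulb(temp_c, rh_pct):
    """Stull (2011) wet-bulb approximation from dry-bulb temp (deg C) and RH (%)."""
    t, rh = temp_c, rh_pct
    return (t * math.atan(0.151977 * math.sqrt(rh + 8.313659))
            + math.atan(t + rh) - math.atan(rh - 1.676331)
            + 0.00391838 * rh ** 1.5 * math.atan(0.023101 * rh)
            - 4.686035)

class TierDebouncer:
    """Promote a tier change only after N consecutive agreeing readings."""
    def __init__(self, initial="green", required=3):
        self.current = initial
        self.required = required
        self._candidate, self._count = initial, 0

    def update(self, tier):
        if tier == self.current:
            self._candidate, self._count = tier, 0  # reset any pending change
        elif tier == self._candidate:
            self._count += 1
            if self._count >= self.required:
                self.current = tier
        else:
            self._candidate, self._count = tier, 1  # new candidate tier
        return self.current
```

A single spurious hot reading never flips the HUD; three in a row do.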
A Meta Quest 3 — renders a full Three.js WebXR VR environment. A photorealistic World Labs panorama serves as the skybox with billboard vegetation sprites (photographic PNG cutouts of trees, shrubs, and dead brush) placed at varying distances for parallax depth. The fire simulation runs at 1 tick per second entirely client-side in JavaScript. HUD panels are Three.js planes parented to the camera with CanvasTexture rendering — vitals, fire map with satellite imagery overlay, timers, and alerts. Voice alerts use the Web Speech API with a priority queue system and two-tone alarm oscillator for critical warnings. Ambient fire audio (filtered white noise) adds atmosphere.
The Gemini integration uses structured bounding box output (box_2d coordinates normalized 0-1000) with response_mime_type="application/json" — not a chatbot interaction but a spatial object detection pipeline. Each detected fuel source is converted from 2D bounding box to a 3D world position via camera frustum unprojection and placed as a persistent marker in WebXR space, where the Quest 3's 6DOF tracking keeps it anchored regardless of head movement.
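The 2D-to-3D step can be sketched as pinhole unprojection of the box center into a camera-space ray. Gemini's `box_2d` is `[ymin, xmin, ymax, xmax]` normalized to 0-1000; the field-of-view value and the fixed marker depth below are our assumptions for the sketch:

```python
import math

def box_to_direction(box_2d, fov_deg=90.0, aspect=1.0):
    """Unproject a Gemini box_2d ([ymin, xmin, ymax, xmax], 0-1000)
    into a unit camera-space ray. Camera looks down -Z (Three.js convention)."""
    ymin, xmin, ymax, xmax = box_2d
    cx = (xmin + xmax) / 2 / 1000.0          # box center, 0..1
    cy = (ymin + ymax) / 2 / 1000.0
    ndc_x = cx * 2 - 1                        # -1..1, right positive
    ndc_y = 1 - cy * 2                        # -1..1, up positive
    t = math.tan(math.radians(fov_deg) / 2)
    dx, dy, dz = ndc_x * t * aspect, ndc_y * t, -1.0
    n = math.sqrt(dx * dx + dy * dy + dz * dz)
    return (dx / n, dy / n, dz / n)

def marker_position(box_2d, depth_m=3.0):
    """Place the marker at an assumed fixed depth along the ray."""
    d = box_to_direction(box_2d)
    return tuple(c * depth_m for c in d)
```

In the real pipeline the camera's world transform is applied to this ray so the marker lands in world space, where 6DOF tracking keeps it anchored.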
The EMG classifier uses hyperdimensional computing — random projection of 4-channel EMG features (RMS, MAV, zero crossings, waveform length) into 10,000-dimensional hypervectors with cosine similarity against trained class centroids. One-second temporal confirmation prevents false positives.
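The classifier described above can be sketched end to end: random Gaussian projection of the feature vector into a high-dimensional bipolar hypervector, centroid accumulation per gesture class, and cosine-similarity lookup. A minimal sketch of the technique, not the trained ForeSight model:

```python
import math
import random

D = 10_000  # hypervector dimensionality

def make_projection(n_features, seed=0):
    """Fixed random projection matrix (D x n_features)."""
    rng = random.Random(seed)
    return [[rng.gauss(0, 1) for _ in range(n_features)] for _ in range(D)]

def encode(features, proj):
    """Random projection followed by sign binarization -> bipolar hypervector."""
    return [1.0 if sum(w * f for w, f in zip(row, features)) >= 0 else -1.0
            for row in proj]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(y * y for y in b)))

class HDClassifier:
    def __init__(self, n_features):
        self.proj = make_projection(n_features)
        self.centroids = {}

    def train(self, label, samples):
        """Accumulate encoded calibration samples into a class centroid."""
        acc = [0.0] * D
        for s in samples:
            acc = [a + h for a, h in zip(acc, encode(s, self.proj))]
        self.centroids[label] = acc

    def classify(self, features):
        hv = encode(features, self.proj)
        return max(self.centroids, key=lambda l: cosine(hv, self.centroids[l]))
```

In ForeSight the feature vector is the 4 channels x 4 features (RMS, MAV, zero crossings, waveform length), and a one-second confirmation window sits on top of this classifier.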
Challenges we ran into
I2C on the UNO Q was a nightmare. The STM32 header pins (A4/A5/SDA/SCL) do not work for I2C under Zephyr OS. We spent hours debugging before discovering that I2C only works through the Qwiic connector. Our MPU9250 breakout board was dead on arrival through the header pins but the Modulino Movement worked instantly via Qwiic.
Serial doesn't work like a normal Arduino. Serial.println() on the UNO Q doesn't go to USB — it routes internally to the Qualcomm chip. We had to learn the Bridge RPC system (Bridge.provide() on the STM32 side, Bridge.call() on the Python side) before any sensor data was visible. The serial monitor in App Lab only shows 9600 baud and the output appears in the Python Shell tab, not the Serial Monitor tab.
ADC instability. Raw analogRead() calls on the STM32 under Zephyr produced wildly fluctuating values (jumping between 0 and 800). The fix: a throwaway read to let the multiplexer settle, then averaging 20 samples with microsecond delays between each. This brought readings from ±400 noise to ±10 stability.
DHT11 timing under Zephyr. The manual bit-bang protocol requires microsecond-precision timing that Zephyr's RTOS preemption disrupts. We implemented fallback values for the demo while keeping the protocol in place for boards where it works.
WebXR requires HTTPS. The Quest 3 browser silently falls back to a flat fullscreen page instead of entering immersive VR if served over HTTP. We generated self-signed certificates on the fly using Python's cryptography library and had to train ourselves to always tap through the browser certificate warning.
The thermistor was wired inverted from every tutorial online. Our voltage divider had the fixed resistor on top and thermistor on bottom (3.3V → 10kΩ → A1 → thermistor → GND), meaning lower ADC values = higher temperatures. The Steinhart-Hart conversion was computing 111°C for room temperature until we flipped the resistance formula.
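The corrected conversion can be sketched as below. With the fixed resistor on top, the ADC sees the voltage across the thermistor, so R_t = R_fixed * adc / (ADC_MAX - adc) and lower counts mean hotter. The 12-bit ADC range and Beta = 3950 are assumed values for the sketch:

```python
import math

# Divider as wired on the armband: 3.3 V -> 10 kOhm fixed -> A1 -> NTC -> GND
R_FIXED = 10_000.0
R0, T0_K, BETA = 10_000.0, 298.15, 3950.0   # 10 kOhm at 25 C; Beta assumed
ADC_MAX = 4095.0                             # 12-bit ADC assumed

def thermistor_resistance(adc):
    """Fixed resistor on top => R_t = R_fixed * adc / (ADC_MAX - adc)."""
    return R_FIXED * adc / (ADC_MAX - adc)

def temperature_c(adc):
    """Beta-equation (simplified Steinhart-Hart) conversion to deg C."""
    rt = thermistor_resistance(adc)
    inv_t = 1.0 / T0_K + math.log(rt / R0) / BETA
    return 1.0 / inv_t - 273.15
```

Flipping the numerator and denominator in `thermistor_resistance` reproduces the original bug: room temperature comes out wildly hot.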
Accomplishments that we're proud of
Real sensor data flowing through a complete pipeline in real time. Two Arduino boards reading physical sensors → WiFi → Python state machine computing WBGT and heat tiers → Flask API → Quest 3 browser polling every 2 seconds → Three.js VR HUD displaying live vitals. Every layer works and they all talk to each other.
Gemini Vision doing real spatial object detection. Not a chatbot wrapper — structured JSON output with bounding box coordinates, converted to 3D world-space markers that stay anchored in VR as you look around. This is a genuine computer vision pipeline using a foundation model.
The fire simulation using real physics. The Rothermel spread model with wind vectors, terrain slope, and fuel-type-specific burn rates isn't a toy — it's the same mathematical framework the US Forest Service uses for fire behavior prediction. Combined with Dijkstra evacuation routing that accounts for predicted fire positions 10 minutes ahead.
EMG gesture control with hyperdimensional computing. A fist clench triggers a fuel scan. No buttons, no screens, no removing gloves. The classifier runs a 10,000-dimensional hypervector model trained on calibration data from the user's own muscle signals.
Full Qualcomm stack from edge to cloud. Qualcomm Dragonwing QRB2210 hardware at the edge processing biosignals, Qualcomm Cloud AI (Llama-3.3-70B + DeepSeek-R1 on AI 100 Ultra via Cirrascale) for contextual decision-making in the cloud.
What we learned
The UNO Q is genuinely powerful but its dual-processor architecture (Linux + Zephyr RTOS communicating over an internal serial bridge) requires a fundamentally different mental model from traditional Arduino development. You're not writing a sketch — you're building a distributed system on a single board.
WebXR on Quest 3 is production-ready for this kind of application. A single HTML file with Three.js can deliver an immersive VR experience with head tracking, controller input, haptic feedback, and spatial audio — no Unity, no native app, no app store submission. The browser is the deployment platform.
Sensor fusion is harder than any individual sensor. Getting one thermistor to read a stable value is easy. Getting six sensors across two microcontrollers to fuse into a single heat stress score with debounced tier transitions, cumulative exposure tracking, and fall detection — while handling connection drops, missing data, and calibration differences — is where the real engineering happens.
Foundation models can replace custom classifiers when the right prompting infrastructure exists. Gemini's bounding box output with forced JSON response gives you object detection without training data, without labeling, without GPUs. The intelligence is in the prompt engineering and the pipeline that converts 2D boxes to 3D spatial markers.
What's next for ForeSight
Real-time wind data integration. Pull live weather data from weather.gov for the firefighter's GPS position and display wind speed, direction, and shift alerts on the HUD. Wind shifts cause 60% of entrapment fatalities — a 5-second API call could save a life.
NASA FIRMS integration. Overlay real satellite thermal hotspot data from NASA's Fire Information for Resource Management System onto the fire map alongside our simulation, grounding the prediction in actual observed fire positions.
Multi-firefighter mesh networking. Real-time position sharing between crew members using the Quest 3's built-in WiFi, with automatic buddy-check alerts if any team member's vitals go critical or they stop moving.
Custom fuel classification model. Train a lightweight segmentation model on ground-level wildfire vegetation imagery for on-device inference, eliminating the Gemini API latency and enabling continuous real-time fuel highlighting.
Partnership with fire agencies. The UCSD WIFIRE Lab is already working with DHS and the US Forest Service on wildfire intelligence platforms. ForeSight addresses the individual firefighter gap that satellite and drone-based systems can't fill — ground-level, real-time, heads-up.
Built With
- 3dprinting
- arduino
- biosensor
- gemini-api
- machine-learning
- python
- qualcomm-cloud
- vr