Inspiration
Modern first responders (soldiers clearing a building, firefighters searching for victims, search and rescue teams sweeping dense terrain) operate with a fundamental disadvantage: no shared picture of the space they're moving through. Each person knows only what they've personally seen. Radios relay words; nothing relays geometry. We wanted to answer a simple question: what if every operator's iPhone were a node in a living, shared map? The phone already tracks its own position with centimeter-level accuracy. The missing piece was the glue: something to stream that awareness to teammates in real time, fuse everyone's data into a single scene, and let you replay and reconstruct a mission after the fact. WallHax is that glue.
What it does
WallHax is a multi-device AR collaboration system with three parts working together:

- Live operations on iOS: Each device tracks its own position and streams it to every other device on the network. Teammates appear as color-coded avatars in your AR view. Operators can drop labeled pins (Threat, Victim, Breach, Hydrant) that instantly show up on every other device. Detected walls and floors are shared too, so everyone sees the same emerging floor plan.
- Three operational modes: Military, Search & Rescue, and Firefighter each have a tailored UI and pin vocabulary. The military layout feels like a tactical terminal; SAR is warm and approachable; firefighter is bold and high-contrast. Same system underneath, completely different feel on the surface.
- Mission reconstruction: After an operation, scans from all devices are merged into a unified dataset. That dataset feeds a web viewer that overlays the team's recorded camera paths on top of a photorealistic 3D capture of the space, so you can scrub through the timeline, review who was where, and see the full picture of how the mission unfolded.
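The pin-sharing idea can be sketched as a small message schema that every device can encode and decode. This is a hypothetical layout for illustration (the `Pin` fields and JSON encoding are assumptions, not our exact wire format):

```python
import json
import time
from dataclasses import dataclass, asdict

@dataclass
class Pin:
    """One labeled pin dropped by an operator (hypothetical schema)."""
    label: str        # e.g. "Threat", "Victim", "Breach", "Hydrant"
    position: tuple   # (x, y, z) in the shared coordinate frame, meters
    device_id: str    # which operator dropped it
    timestamp: float  # Unix time, so pins can be replayed on the mission timeline

def encode(pin: Pin) -> bytes:
    """Serialize a pin for broadcast to every other device."""
    return json.dumps(asdict(pin)).encode()

def decode(payload: bytes) -> Pin:
    """Rebuild the pin on a receiving device."""
    d = json.loads(payload)
    return Pin(d["label"], tuple(d["position"]), d["device_id"], d["timestamp"])

pin = Pin("Victim", (2.4, 0.0, -1.7), "iphone-03", time.time())
assert decode(encode(pin)) == pin  # round-trips cleanly
```

Because every pin carries a timestamp and device id, the same messages that drive the live view also feed mission playback afterward.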
How we built it
We built across four distinct layers in parallel: a SwiftUI iOS app using ARKit for spatial tracking and RealityKit for rendering teammates, a Python relay server running on a Mac that forwards position data between devices and draws live trajectories, a mapping pipeline that processes scan data from multiple phones into a single merged dataset, and a React + Three.js web viewer that plays back the recorded paths over a photorealistic 3D capture of the environment. The whole system communicates over a local network with no cloud dependency, so it works in basements, deep inside buildings, and anywhere connectivity is unreliable.
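The relay layer boils down to a fan-out: every datagram a device sends gets forwarded to every other device the server has heard from. A minimal sketch of that idea, assuming plain UDP (the port number and function names are illustrative, not our production code):

```python
import socket

def fanout_targets(peers, sender):
    """Everyone who should receive this update: all peers except the sender."""
    return [p for p in peers if p != sender]

def run_relay(host="0.0.0.0", port=9999):
    """Blocking UDP relay: rebroadcast each datagram to every other device."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.bind((host, port))
    peers = []  # addresses of devices we've heard from, in arrival order
    while True:
        data, addr = sock.recvfrom(4096)  # one position/pin update
        if addr not in peers:
            peers.append(addr)
        for peer in fanout_targets(peers, addr):
            sock.sendto(data, peer)
```

Keeping the relay stateless beyond a peer list is what makes the low-latency goal realistic: each update is forwarded the moment it arrives, with no queuing or cloud round trip.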
Challenges we ran into
The biggest challenge was taking the raw map data recorded through SLAM and the IMU (essentially a stream of sensor readings as someone walks a space) and turning it into a clean, navigable 3D Gaussian splat reconstruction. Getting the capture, the poses, and the photorealistic render to all agree on the shape of the room was a significant technical and visual hurdle. The other major challenge was the core "wall hack" effect itself: rendering a teammate's avatar through walls. Making a 3D stick figure appear to bleed through solid geometry in a way that feels intentional and readable, rather than just broken, required carefully controlling how the model is drawn relative to the scene's depth information. Getting it to look right, stay performant, and actually convey useful positional information about someone on the other side of a wall took a lot of iteration. On top of that, getting multiple iPhones to share the same coordinate frame, so that everyone's position means the same thing to every device, was a foundational problem that everything else depended on.
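For gravity-aligned AR sessions, the shared-coordinate-frame problem reduces to finding the yaw rotation and translation between two devices' frames. A toy sketch under that simplifying assumption, using two ground-plane landmarks observed by both devices (the two-landmark setup and function names are illustrative, not our actual alignment code):

```python
import math

def align_frames(p1, p2, q1, q2):
    """
    Return a function mapping device B's frame onto device A's, given two
    landmarks seen by both devices. Points are (x, z) ground-plane coords;
    gravity alignment means the frames differ only by yaw and translation.
    """
    # Yaw is the difference between the landmark baseline's heading in each frame.
    yaw = math.atan2(p2[1] - p1[1], p2[0] - p1[0]) - \
          math.atan2(q2[1] - q1[1], q2[0] - q1[0])
    c, s = math.cos(yaw), math.sin(yaw)
    rot = lambda q: (c * q[0] - s * q[1], s * q[0] + c * q[1])
    # Translation makes the rotated first landmark coincide in both frames.
    rq1 = rot(q1)
    t = (p1[0] - rq1[0], p1[1] - rq1[1])
    def to_frame_a(q):
        rq = rot(q)
        return (rq[0] + t[0], rq[1] + t[1])
    return to_frame_a

# Sanity check: frame B is frame A rotated 90 degrees and shifted by (1, 2).
to_a = align_frames((0, 0), (1, 0), (-2, 1), (-2, 0))
```

In practice more landmarks and a least-squares fit make this robust to noise, but the two-point version shows why a shared frame is possible at all once gravity pins down two of the three rotation axes.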
Accomplishments that we're proud of
- A fully working real-time relay where teammates genuinely appear in each other's AR views with low latency on a local network.
- Three use-case modes that feel purpose-built rather than reskinned; the UX decisions go all the way down to font choice, pin labels, and interaction style.
- A complete end-to-end pipeline from phone → scan export → merged dataset → photorealistic web reconstruction, with no manual annotation required.
- Shipping a coherent system across iOS, a Python backend, and a web frontend in a single hackathon sprint.
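The merge step of that pipeline can be sketched as interleaving per-device pose trails into one time-ordered timeline for the web viewer to scrub through. This is an illustrative shape only; the real dataset carries more per-sample state than a bare position:

```python
def merge_paths(recordings):
    """
    Merge per-device pose trails into a single playback timeline.
    `recordings` maps device id -> list of (timestamp, (x, y, z)) samples;
    returns one time-sorted list of (timestamp, device_id, position) rows.
    """
    merged = [
        (t, device, pos)
        for device, samples in recordings.items()
        for t, pos in samples
    ]
    merged.sort(key=lambda row: row[0])  # interleave all devices by time
    return merged

timeline = merge_paths({
    "iphone-01": [(0.0, (0, 0, 0)), (1.0, (1, 0, 0))],
    "iphone-02": [(0.5, (5, 0, 2))],
})
# timeline interleaves both devices in chronological order
```

Once everything lives on one clock, "scrub to t and show where everyone was" is just a lookup, which is what makes the replay view straightforward to build.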
What we learned
Rendering is only half the battle; how and when something renders matters just as much. The wall-hack effect sounds simple in concept, but making it feel like a feature rather than a glitch taught us a lot about depth, occlusion, and the assumptions baked into 3D rendering pipelines. We also learned that the gap between "recorded data" and "usable 3D scene" is wide. SLAM and the IMU give you a pose trail; turning that into something photorealistic that a viewer can actually navigate takes a meaningful amount of processing and alignment work on top. The pipeline connecting capture to reconstruction was one of the most involved parts of the whole project.
What's next for WallHax
- Firefighter heat maps: overlaying a live heat map of the space on top of the shared AR view, so commanders and operators can see at a glance where temperatures are spiking and where it's still safe to move.
- Thermal vision: integrating thermal camera data directly into the AR view so operators can see through smoke and near-zero visibility conditions. For firefighters especially, this could be the difference between finding a victim and missing them entirely.
- Cross-platform support: bringing WallHax beyond iPhone to Android, AR headsets, and ruggedized field devices, so any operator regardless of hardware can join the same mission.
- Long-range radio communication: replacing the WiFi dependency with a radio-based mesh network that works over long distances and punches through thick concrete walls. This would let WallHax function in exactly the environments where it's needed most: deep inside buildings, underground, or across sprawling sites, with no existing infrastructure required.