Inspiration
Sometimes, problems can seem far away. Yet the impact of our consumption and production patterns on the environment is no longer a distant concern: it reaches each and every one of us. More than a third of the United Nations Sustainable Development Goals – 6 of 17 – deal with the impact of human interaction with our ecosystems, and we believe this is a great reason to raise awareness and encourage people to take action through as many channels as possible. Today, media like TV, the internet, and VR allow us to witness a melting glacier, plunge into polluted water, or stand in the middle of a raging wildfire. Yet many of us have become desensitized to topics like global warming and pollution. By leveraging mixed reality and bringing scenes of crisis into living rooms, we want people to feel the consequences of climate change alongside the beauty of nature, and to build empathy for it. Fostering change alone is hard, so we take this experience into a shared space where people can connect with nature and with each other.
What it does
Simply touch the walls around you to step into three scenarios – pollution, rising sea levels, and raging wildfires – and invite friends and family to take action together. Concrete actions, such as changes in everyday behavior and donation opportunities, are presented at key points. When you touch a wall in any room or urban setting, it creates a rift to another world. Peer through that portal and you see the first scene: the serene beauty of the underwater world. Turtles and other marine creatures swim by as though you could reach out and touch them. But then something strange happens: the waste and pollution of the ocean start to invade your room, and you feel overwhelmed. You can collect the trash simply by touching it with your hand, and each time you eliminate a plastic bottle, information about our climate problem pops up.

The portal then shifts to the Arctic, where icebergs approach you and break apart while seagulls flee through the room. As pieces of the icebergs fall right in front of your feet, you find yourself stepping back in fear.

Finally, the portal takes you to a raging wildfire, and you see the flames engulfing your room.

How we built it
We found a great team to tackle a big challenge. It was crucial to have interaction design, development, and 3D creation go hand in hand with very tight feedback loops. Once we found a core interaction that we liked, we iterated live on Quest 3 to find the right pacing and timing. The scenes were assembled in Unity and exported to the web with Needle Engine. This let us iterate very quickly, right on device, and surface performance issues and optimization opportunities early, for example when adjusting models and textures. Through spatial understanding and passthrough, Meta’s Presence Platform, combined with our shared social experience, achieves a high level of connection. But since new technologies can also be barriers to entry, we designed our hack from the ground up for the open web. Built for WebXR on devices like Meta Quest 3, our key message is also delivered on smartphones and desktops.
Accessible cross-platform support on the open web
By leveraging modern web technologies, we can make In Arm’s Reach accessible to everyone with a browser. The best experience is on Quest 3 in AR, with Room Setup completed and hand tracking enabled. Mobile phones, desktop screens, and immersive VR headsets are also supported.
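As a rough sketch of how this progressive fallback can be detected on the web (illustrative code, not our production setup; the function name and fallback order are our own), the WebXR API lets a page query which session modes the current device supports:

```typescript
// Minimal sketch of progressive WebXR support detection. The function name and
// fallback order are illustrative. TypeScript typings for navigator.xr come
// from the @types/webxr package.
async function pickBestMode(): Promise<"immersive-ar" | "immersive-vr" | "inline"> {
  if (navigator.xr) {
    // Quest 3 with passthrough and Room Setup supports immersive AR sessions
    if (await navigator.xr.isSessionSupported("immersive-ar")) return "immersive-ar";
    // VR-only headsets still get the fully immersive version of the scenes
    if (await navigator.xr.isSessionSupported("immersive-vr")) return "immersive-vr";
  }
  // Phones and desktops fall back to rendering into a regular canvas
  return "inline";
}
```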


Information Bubbles
The water scene demonstrates a concept we coined “information bubbles”: reaching out to the pieces of trash makes them crumble and disappear, and in their place we provide imagery of the amount of waste produced daily. The images were taken at the RealityHack event.
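A minimal sketch of that interaction in plain three.js terms (the touch radius and the makeInfoPanel helper are hypothetical stand-ins for the real hand-tracking and UI code):

```typescript
import * as THREE from "three";

// Illustrative sketch of an "information bubble": when a tracked hand joint
// comes within reach of a piece of trash, the trash disappears and an info
// panel takes its place. TOUCH_RADIUS and makeInfoPanel are assumptions.
const TOUCH_RADIUS = 0.15; // metres

function updateInformationBubbles(
  handJointWorldPos: THREE.Vector3,
  trashPieces: Set<THREE.Object3D>,
  scene: THREE.Scene,
  makeInfoPanel: (at: THREE.Vector3) => THREE.Object3D // e.g. a textured quad with waste statistics
): void {
  for (const trash of trashPieces) {
    const pos = trash.getWorldPosition(new THREE.Vector3());
    if (pos.distanceTo(handJointWorldPos) < TOUCH_RADIUS) {
      trashPieces.delete(trash); // a short "crumble" animation could run here instead
      scene.remove(trash);
      scene.add(makeInfoPanel(pos)); // the bubble appears where the trash was
    }
  }
}
```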
Dynamic switching between VR and AR
The ice scene demonstrates a dynamic switch between AR and VR: once the pieces have settled, the user can step closer to become fully immersed in the scenery. One step back and they’re in mixed reality again, seeing both the virtual iceberg and their living room.
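Conceptually, the switch amounts to toggling the room-plane occluders depending on which side of the portal wall the user’s head is on. A hedged sketch, with assumed names and plane conventions:

```typescript
import * as THREE from "three";

// Sketch of the AR <-> VR switch: while the user's head is on the room side of
// the portal wall, the plane occluders stay active and the experience reads as
// mixed reality; stepping "through" disables them so the virtual scenery
// surrounds the user. Names and the plane convention are assumptions.
function updateImmersion(
  camera: THREE.Camera,
  portalWall: THREE.Plane, // wall plane, normal pointing into the room
  planeOccluders: THREE.Object3D[]
): void {
  const head = camera.getWorldPosition(new THREE.Vector3());
  const insideRoom = portalWall.distanceToPoint(head) > 0;
  for (const occluder of planeOccluders) {
    occluder.visible = insideRoom; // hidden occluders no longer clip the scenery
  }
}
```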
Timing and Storytelling

Using Unity’s Timeline, we were able to sequence events for each of the scenes. This is especially apparent in the ice scene: once the user has touched the wall for the first time, a mix of predefined motion and secondary animation brings the iceberg into the user’s living room and raises the sea level. The seagulls, fleeing from the event, soar above the user’s head and then linger in their space.
Similarly, the water scene is timed so that one of the turtles floats into the user’s space a while after the interaction starts, and the fire scene makes the flames engulf the user. A few more trees also grow from the floor to increase immersion.
Secondary Animation
Instead of hand-animating each and every object, we used natural noise patterns to animate creatures, seaweed, trees and pieces of ice. In some cases, this secondary animation is combined with hand animation as outlined in the storytelling section above. Particle systems add an additional sense of space in the fire and water scenes.
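As an illustration of noise-driven secondary animation (layered sines stand in for whatever noise function is actually used; all constants here are made up):

```typescript
import * as THREE from "three";

// Layered sines approximate smooth natural noise and gently sway an object
// around its rest pose without any hand-keyed animation. The seed de-syncs
// objects so a whole kelp forest doesn't move in lockstep.
function applySecondaryMotion(
  object: THREE.Object3D,
  restPosition: THREE.Vector3,
  timeSeconds: number,
  seed: number
): void {
  const t = timeSeconds + seed * 17.23;
  object.rotation.z = 0.08 * Math.sin(t * 0.7) + 0.03 * Math.sin(t * 1.9 + 1.3); // sway
  object.position.copy(restPosition);
  object.position.y += 0.02 * Math.sin(t * 0.5 + seed); // slow bobbing
}
```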
3D Modelling and Pipeline
All models were created as glTF files in Blender and imported into Unity. This retains all material and animation data, ready to be used interactively. The iceberg animation has been baked from a physics simulation in Blender and exported as glTF as well. Audio files were created as mp3s.
All created files undergo the automatic optimizations built into Needle, which ensure that meshes are compressed with Draco (for smaller file size) and textures with KTX2-ETC1S (for super efficient memory usage).
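Needle Engine wires the matching decoders up automatically at load time; for reference, this is roughly what consuming Draco-compressed meshes and KTX2 textures looks like in plain three.js (the decoder paths are placeholders):

```typescript
import * as THREE from "three";
import { GLTFLoader } from "three/examples/jsm/loaders/GLTFLoader.js";
import { DRACOLoader } from "three/examples/jsm/loaders/DRACOLoader.js";
import { KTX2Loader } from "three/examples/jsm/loaders/KTX2Loader.js";

// Needle Engine sets the equivalent of this up automatically; the decoder and
// transcoder paths below are illustrative placeholders.
function createCompressedGltfLoader(renderer: THREE.WebGLRenderer): GLTFLoader {
  const draco = new DRACOLoader().setDecoderPath("/libs/draco/");
  const ktx2 = new KTX2Loader()
    .setTranscoderPath("/libs/basis/")
    .detectSupport(renderer); // picks the best GPU format to transcode ETC1S into
  return new GLTFLoader().setDRACOLoader(draco).setKTX2Loader(ktx2);
}
```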
Mixed Reality Recordings
Since we added multi-user capabilities to our app early on, we were able to leverage that to record footage right from Quest in Mixed Reality. By aligning the spaces of two or more users, we can record exactly what the other person does.
Rendering Overview
A main feature of In Arm's Reach is the ability to look behind your walls and bring a virtual environment into your room. This is enabled by WebXR features that are backed by Meta's Presence Platform.
We're rendering in a very specific order (a short code sketch follows the list below):
- Quest OS Room Setup provides spatial planes (walls, floors) to WebXR experiences as part of the plane-detection WebXR feature.
- When starting an immersive-ar WebXR session, features like hand tracking and plane detection are requested.
- Planes are set up as Occluders, which render depth but not color.
- This makes them invisible, but other objects will clip against them (you can't see behind them).
- When a hand or controller comes close to a wall, the Environmental Scenes are placed at the first touch point, and new animated Depth Cutters are placed at a regular distance.
- Depth Cutters "punch" holes into the depth buffer that has already been seeded by the tracked planes.
- Portal "fringes" are rendered after occluders but before depth cutters, so that there are no fringes inside of portals.
- After this, all other scene geometry is rendered.
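Here's a condensed three.js sketch of that ordering (an illustration of the technique; the shipped project is Unity plus Needle Engine, so details differ):

```typescript
import * as THREE from "three";

// 1) A tracked room plane becomes an occluder: it writes depth but no color,
//    so virtual geometry behind the real wall is clipped away.
const wallOccluder = new THREE.Mesh(
  new THREE.PlaneGeometry(4, 3),
  new THREE.MeshBasicMaterial({ colorWrite: false })
);
wallOccluder.renderOrder = -3; // occluders seed the depth buffer first

// 2) A Depth Cutter is a portal-shaped mesh that always passes the depth test
//    and writes far-plane depth, punching a hole back into the seeded buffer.
const depthCutter = new THREE.Mesh(
  new THREE.CircleGeometry(0.6, 48), // placed at the touch point on the wall
  new THREE.ShaderMaterial({
    glslVersion: THREE.GLSL3,
    colorWrite: false,
    depthWrite: true,
    depthFunc: THREE.AlwaysDepth, // overwrite whatever the occluder wrote
    vertexShader: `
      void main() {
        gl_Position = projectionMatrix * modelViewMatrix * vec4(position, 1.0);
      }`,
    fragmentShader: `
      void main() {
        gl_FragDepth = 1.0;       // far plane: the hole is "open" again
        gl_FragColor = vec4(0.0); // never shown (colorWrite is off)
      }`,
  })
);
depthCutter.renderOrder = -1; // after occluders and after the visible fringes

// 3) Portal fringes render in between (renderOrder = -2), so they clip against
//    the walls but never appear inside an open portal.
// 4) All remaining scene geometry keeps the default renderOrder of 0: it clips
//    against the occluded walls yet stays visible through the punched holes.
```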
In a future update, we'd like to incorporate Quest Depth Sensing, which provides real-time occlusion, into this rendering pipeline for even greater immersion.
Multi-User Support
To start a session with other people:
- Ensure all users have a valid Room Setup.
- All users should recenter their Meta Quest on a spot that’s roughly 3 metres away from a wall. If users are in the same physical location, they should recenter on the same spot.
- Append ?room=ABCD (any random room ID) to the URL and refresh the page (see the sketch after this list).
- Send the full URL to others so they can join the same session.
- If there was a previous session in that room, click on “Menu” and “Reset” in the upper right corner of the page. This is a workaround to clear outdated synchronized data and would not be needed for a production application.
- Have all people enter the AR session.
- Once everyone has touched a wall at least once, synchronization is in effect. A red overlay may be visible while a user hasn’t yet touched a wall.
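For illustration, the room-link convention can be reproduced with a few lines of plain web code (a hypothetical helper; the app’s networking layer reads the room parameter itself):

```typescript
// Hypothetical helper for the ?room= convention above; it just builds a
// shareable link with a random room ID if one isn't present yet.
function getShareableRoomUrl(): string {
  const url = new URL(window.location.href);
  if (!url.searchParams.has("room")) {
    // any random ID works; everyone opening a link with the same ID
    // ends up in the same synchronized session
    const id = Math.random().toString(36).slice(2, 8).toUpperCase();
    url.searchParams.set("room", id);
  }
  return url.toString();
}
```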
Challenges we ran into
A conceptual challenge we ran into was deciding which scenarios to use for our hack, and what feeling to convey to our users. We wanted to avoid showing only negative scenarios and to show that taking action can have a clear impact. We also didn’t want to make it seem too easy to “solve climate change”, so we avoided overly gamified interactions and focused instead on minimal interactions and calls to action. One design challenge was finding a good balance between abstraction and realism: we wanted to avoid falling into the “uncanny valley”, but still provide enough realism to make clear that these challenges are real. An example we’re not fully happy with yet is the wildfire – it conveys the right impression, but we may want to spend more time on it in the future.
Accomplishments that we're proud of
We’re very proud that everyone who tried the prototype so far loved the feeling of gently touching a wall to reveal a world so close to our hearts. This physical connection adds an emotional component that has often been lacking from other immersive experiences dealing with environmental challenges. We think we managed our time pretty well – we had enough time to iterate on each of the scenes and get the feeling right. Audio got into the app very early instead of being an afterthought. Our approach to portal rendering feels pretty novel – typically, stencil buffer rendering is used to provide “portals” into various scenarios, but our depth buffer-based approach allows for greater creative flexibility when it comes to objects exiting from portals into your space.
What we learned
It was great to learn more about the UN’s sustainable development goals and challenges during the first day of the hack. This broadened our understanding of what challenges we can meaningfully tackle. This project has taught us that everything is connected—urban actions and nature have a symbiotic relationship. By using mixed reality, we've found a powerful way to bring nature closer to urban spaces and give people the experience of seeing and feeling this connection firsthand. This project has solidified our intuition that an immersive experience, when done right, can create a stronger bond with the environment, leading to a greater awareness of the need for sustainable urban development.
What's next for In Arm's Reach
This is merely the tip of the iceberg (no pun intended). Through partnerships with various organizations, we aim to demonstrate the tangible impact that proactive measures can have on our climate. Our vision involves enhancing engagement by seamlessly integrating additional information and data into enjoyable interactions. Furthermore, we plan to expand the project's reach by weaving in more narratives surrounding climate change. Additionally, exploring educational collaborations with schools and universities could broaden our audience and deepen the project's impact. We envision evolving into a comprehensive platform that not only raises awareness but also equips people with the knowledge and motivation to take meaningful actions against climate change.


