Inspiration
The project began when I built what I believe to be the world’s first system that lets users control real-world devices simply by looking at them. Building it showed me how natural and intuitive spatial interfaces can be, and it opened my eyes to a deeper opportunity: people want to capture and relive memories with the richness of actually being there, yet today’s tools reduce those moments to flat photos and videos. That gap inspired me to explore how spatial capture and sharing could become as simple and universal as taking a picture.
What it does
AirVis enables anyone to scan objects, rooms, and full environments using just their phone, reconstructing them with photogrammetry and Neural Radiance Fields so they can be viewed, explored, and shared in immersive spatial formats. Users can walk through captured environments, post them socially, and experience other people’s spaces in a way that feels deeply present and lifelike.
How we built it
AirVis was built end-to-end as a solo engineering effort across multiple platforms, including mobile and XR devices. I developed custom rendering engines in Metal, Vulkan, and OpenGL ES, implemented photogrammetry pipelines, and integrated Gaussian Splatting methods to generate realistic spatial reconstructions. On the backend, I built GPU-accelerated cloud workloads using Docker + CUDA on remote servers. The result is a full-stack capture-to-share system that works seamlessly across devices.
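To make the backend flow concrete, here is a minimal sketch of what one capture-to-reconstruction job on a GPU worker might look like. It is illustrative only: the function names, paths, and the `train_gaussian_splats.py` trainer script are hypothetical, and it assumes the COLMAP CLI and a Gaussian Splatting trainer are available inside the CUDA-enabled Docker image.

```python
"""Illustrative sketch of a capture-to-reconstruction job.

Assumes it runs inside a CUDA-enabled Docker image with the COLMAP
CLI installed; the splat-training step is a placeholder for whatever
Gaussian Splatting trainer the worker image ships with.
"""
import subprocess
from pathlib import Path


def run_photogrammetry(frames_dir: Path, workspace: Path) -> Path:
    """Recover camera poses and a sparse point cloud with COLMAP."""
    workspace.mkdir(parents=True, exist_ok=True)
    db = workspace / "database.db"
    sparse = workspace / "sparse"
    sparse.mkdir(exist_ok=True)
    # Standard COLMAP stages: feature extraction -> matching -> sparse mapping.
    subprocess.run(["colmap", "feature_extractor",
                    "--database_path", str(db),
                    "--image_path", str(frames_dir)], check=True)
    subprocess.run(["colmap", "exhaustive_matcher",
                    "--database_path", str(db)], check=True)
    subprocess.run(["colmap", "mapper",
                    "--database_path", str(db),
                    "--image_path", str(frames_dir),
                    "--output_path", str(sparse)], check=True)
    return sparse


def train_splats(frames_dir: Path, sparse: Path, out_dir: Path) -> Path:
    """Placeholder: hand poses + images to a Gaussian Splatting trainer."""
    out_dir.mkdir(parents=True, exist_ok=True)
    # The real trainer, its flags, and its output format depend on the image.
    subprocess.run(["python", "train_gaussian_splats.py",
                    "--images", str(frames_dir),
                    "--sparse", str(sparse),
                    "--out", str(out_dir)], check=True)
    return out_dir / "scene.splat"


if __name__ == "__main__":
    frames = Path("/jobs/capture_001/frames")
    work = Path("/jobs/capture_001/work")
    artifact = train_splats(frames, run_photogrammetry(frames, work),
                            work / "splats")
    print(f"reconstruction ready: {artifact}")
```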
Challenges we ran into
One major challenge was optimizing the rendering engine for real-time viewing on mobile and XR hardware, which required deep GPU-level optimization. Another was building a consistent rendering pipeline across very different platforms with fragmented capabilities. Cloud compute posed its own obstacles. And finally, distribution remains a challenge: even a strong spatial product needs thoughtful, persistent community-building to reach users.
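The GPU-level details are beyond a write-up like this, but one representative optimization for splat-style renderers on constrained hardware is culling points outside the view frustum and depth-sorting the survivors before blending. The sketch below shows the idea on the CPU with NumPy; the array names and conventions are hypothetical, and a production version of this pass would live in a compute shader.

```python
"""Illustrative CPU sketch of one common splat-renderer optimization:
frustum-cull splat centers, then depth-sort the survivors back-to-front
for alpha blending. A real engine would do this on the GPU."""
import numpy as np


def cull_and_sort(centers: np.ndarray, view_proj: np.ndarray) -> np.ndarray:
    """centers: (N, 3) world-space splat centers.
    view_proj: (4, 4) combined view-projection matrix.
    Returns indices of visible splats, farthest first."""
    n = centers.shape[0]
    homo = np.hstack([centers, np.ones((n, 1))])   # (N, 4) homogeneous coords
    clip = homo @ view_proj.T                      # (N, 4) clip-space coords
    w = clip[:, 3]
    # Keep points inside the clip volume (Metal/D3D-style 0..w depth range).
    visible = ((w > 0)
               & (np.abs(clip[:, 0]) <= w)
               & (np.abs(clip[:, 1]) <= w)
               & (clip[:, 2] >= 0) & (clip[:, 2] <= w))
    idx = np.nonzero(visible)[0]
    depth = clip[idx, 2] / w[idx]                  # normalized device depth
    return idx[np.argsort(-depth)]                 # back-to-front order
```

The back-to-front ordering matters because splats are alpha-blended: drawing them out of order produces visible artifacts, while culling first keeps the sort cheap enough for real-time frame budgets.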
Accomplishments that we're proud of
We built and launched full multi-platform support, released a custom rendering engine, and implemented production-quality spatial reconstruction pipelines entirely in-house. AirVis has grown to over a thousand registered users with early paid subscribers. Achieving all of this as a solo founder is something I’m especially proud of.
What we learned
We learned that presence is the defining advantage of spatial media: users immediately understand its value once they experience it. We also learned how difficult it is to take cutting-edge research and translate it into a stable, performant mobile product. The process reinforced the importance of distribution, community engagement, and iteration based on real user feedback. Finally, we realized that cross-platform spatial experiences are not just nice to have — they are essential for building a true social ecosystem.
What's next for AirVis
Next, we’re building cross-platform spatial communication features so users can join each other inside captured environments and experience spaces together in real time. We’re also improving reconstruction speed and developing tools that let creators remix, annotate, or enhance their scenes. Long term, we aim to make AirVis the default place where people store, relive, and share their immersive memories: a spatial social platform that goes far beyond what’s possible with photos and videos.
