The problem

People love to revisit their favourite memories. Unfortunately, scrolling through a camera roll or framing printed pictures lacks the immersive, cathartic quality these memories deserve.

What it does

Reminiscence turns an ordinary phone video into an explorable 3D memory: capture a scene on Meta Ray-Bans or iPhone, reconstruct it into a VR environment using Gaussian Splatting, import it into Unity, and walk through it in VR on a Meta Quest/Oculus headset. The cool part is that a flat clip becomes a spatial asset you can revisit, place in a scene, and experience at human scale.

How we built it

The project connects a Swift capture app, a FastAPI backend, COLMAP (for camera-pose estimation), FastGS (for Gaussian Splatting training), and Unity/OpenXR into one local video-to-VR reconstruction pipeline.
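The backend's reconstruction stage can be sketched as a sequence of CLI calls. This is a minimal illustration, assuming frames have already been extracted from the uploaded video: the COLMAP subcommands (`feature_extractor`, `sequential_matcher`, `mapper`) are real, while the `fastgs` invocation is a hypothetical placeholder for however FastGS is launched locally.

```python
from pathlib import Path

def build_reconstruction_commands(frames_dir: Path, work_dir: Path) -> list[list[str]]:
    """Assemble the CLI steps that turn extracted video frames into a splat.

    The COLMAP subcommands are real; the `fastgs` call is a hypothetical
    stand-in for the actual FastGS training entry point.
    """
    db = work_dir / "database.db"
    sparse = work_dir / "sparse"
    return [
        # 1. Extract SIFT features; capping image size trades accuracy for speed.
        ["colmap", "feature_extractor",
         "--database_path", str(db),
         "--image_path", str(frames_dir),
         "--SiftExtraction.max_image_size", "1600"],
        # 2. Sequential matching suits video, since consecutive frames overlap.
        ["colmap", "sequential_matcher", "--database_path", str(db)],
        # 3. Sparse reconstruction: camera poses plus a sparse point cloud.
        ["colmap", "mapper",
         "--database_path", str(db),
         "--image_path", str(frames_dir),
         "--output_path", str(sparse)],
        # 4. Train the Gaussian splat (hypothetical FastGS CLI).
        ["fastgs", "train",
         "--data", str(work_dir),
         "--out", str(work_dir / "splat.ply")],
    ]
```

Each command list can then be run in order with `subprocess.run(cmd, check=True)`; keeping the steps as data makes it easy to log, time, or skip stages while tuning the pipeline.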

Accomplishments that we're proud of

Video -> VR environment in under one minute end to end, achieved by tuning COLMAP and FastGS parameters.

What we learned/challenges

The entire tech stack was new to every team member. Learning Unity and VR while building a Swift mobile app that connects to state-of-the-art ML models was difficult but rewarding. Each of us has come out of the hackathon with new tools added to our skillset!

What's next for Reminiscence

In the future, Reminiscence aims to add VR video support, letting users replay moving scenes inside the 3D environment instead of only viewing static reconstructions.

Built With

colmap · fastapi · fastgs · swift · unity · openxr
