Inspiration

Photogrammetry, a technique for creating three-dimensional models from two-dimensional images, increases the sense of presence and emotional connection to a memory. We wanted to explore the limits of current immersive technology, using photogrammetric models to maximize this sense of presence. To make this technology available to the widest possible audience, we also strove to create a simple-to-use process built around mobile phones.

What it does

We created an open-source photogrammetry pipeline: capture images of a still object, character, or scene with your cellphone, upload them to a self-hosted server, and get back a photogrammetric model.
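As a rough illustration of the capture-and-upload step, here is a minimal Python sketch. The server URL, endpoint, and response field are hypothetical stand-ins, since the actual API of our self-hosted server isn't spelled out here.

```python
import glob
import requests  # third-party: pip install requests

# Hypothetical endpoint on the self-hosted reconstruction server.
SERVER_URL = "http://photogrammetry-server.local:8000/jobs"

def upload_capture(frame_dir: str) -> str:
    """Upload a folder of captured frames and return a job ID for the reconstruction."""
    paths = sorted(glob.glob(f"{frame_dir}/*.jpg"))
    files = [("frames", open(p, "rb")) for p in paths]
    response = requests.post(SERVER_URL, files=files)
    response.raise_for_status()
    return response.json()["job_id"]  # hypothetical response field

if __name__ == "__main__":
    print(upload_capture("captures/living_room"))
```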

We also created a VR-simulated AR experience where you can step into the three scenes we documented during the hackathon and explore what's possible with this photogrammetry-powered future. One of the scenes is also character-rigged, animated, and fully VR-immersive.

How we built it

As reAnimators, we are five creatives who took a multidimensional journey together, from coding and journalism to animation, software, and UX design.

We used our smartphones to capture photogrammetry scenes and objects: shooting videos, separating out the frames, and stitching them together with Agisoft PhotoScan (trial) into 3D assets. We then cleaned up the models in Blender and processed the character models with Mixamo for auto-rigging and animation. That's how we turned photorealistic 3D content into the photogrammetry album XR experience.
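The frame-separation step boils down to sampling stills out of a phone video. The sketch below shows roughly what we mean, assuming the ffmpeg binary is installed; the paths and frame rate are illustrative.

```python
import subprocess
from pathlib import Path

def extract_frames(video_path: str, out_dir: str, fps: int = 2) -> None:
    """Sample still frames from a phone video so they can be fed to the photogrammetry tools."""
    Path(out_dir).mkdir(parents=True, exist_ok=True)
    subprocess.run(
        [
            "ffmpeg",
            "-i", video_path,        # input video from the phone
            "-vf", f"fps={fps}",     # keep only a few frames per second
            "-qscale:v", "2",        # high-quality JPEG output
            f"{out_dir}/frame_%04d.jpg",
        ],
        check=True,
    )

extract_frames("captures/statue.mp4", "captures/statue_frames", fps=2)
```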

We used Unity to render the photogrammetry models on the HTC Vive, with SteamVR for the headset connection and VRTK to streamline basic model interactions. Music: www.bensound.com

For the prototype photogrammetry pipeline, we used COLMAP, OpenMVS, OpenMVG, Swift, the FFmpeg library, C#, Python, and Agisoft PhotoScan.
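Below is a rough sketch of how the open-source chain (COLMAP for structure-from-motion, OpenMVS for dense meshing and texturing) fits together; it assumes the colmap and OpenMVS command-line tools are installed, and the workspace layout is illustrative rather than our exact scripts.

```python
import subprocess
from pathlib import Path

def run(cmd):
    """Run one pipeline stage and fail loudly if it breaks."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

work = Path("captures/statue_workspace")   # illustrative layout
images = "captures/statue_frames"          # frames extracted from the phone video
(work / "sparse").mkdir(parents=True, exist_ok=True)

# 1. Structure-from-motion with COLMAP: features, matching, sparse model, undistortion.
run(["colmap", "feature_extractor", "--database_path", f"{work}/db.db", "--image_path", images])
run(["colmap", "exhaustive_matcher", "--database_path", f"{work}/db.db"])
run(["colmap", "mapper", "--database_path", f"{work}/db.db",
     "--image_path", images, "--output_path", f"{work}/sparse"])
run(["colmap", "image_undistorter", "--image_path", images,
     "--input_path", f"{work}/sparse/0", "--output_path", f"{work}/dense",
     "--output_type", "COLMAP"])

# 2. Dense reconstruction, meshing, and texturing with OpenMVS.
run(["InterfaceCOLMAP", "-i", f"{work}/dense", "-o", f"{work}/scene.mvs"])
run(["DensifyPointCloud", f"{work}/scene.mvs"])
run(["ReconstructMesh", f"{work}/scene_dense.mvs"])
run(["TextureMesh", f"{work}/scene_dense_mesh.mvs"])
```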

Challenges we ran into

  • The Vive headset not working (it needed a special adapter)
  • Capturing multiple figures across a wide range of lighting environments
  • Difficulty exporting Blender mesh animations to Unity
  • Mixamo's auto-rigging tool being very picky about input models
  • Learning and using VRTK in one day
  • Processing and rendering large mesh objects
  • Unity's lack of good team-collaboration solutions

Accomplishments that we're proud of

  • We kept an open and honest collaborative environment with regular stand-ups, sync-ups, and feature demos.
  • We took many risks during the hackathon: taking a long time to brainstorm so everyone was heard, working with bleeding-edge technology and early-version software, and working with tools and workflows we weren't familiar with, and we still got a satisfying result at the end.
  • We tested the process of creating a fully rigged and animated character from nothing but a smartphone video clip.
  • We created a new UX model for scene transitions in VR (leaning into a scene).
  • Dancing on the table

What we learned

  • Sleep is optional. For a great idea, people are willing to stay late, change their schedules, and more.
  • Creating a low-fidelity proof of concept as early as possible really speeds up the process and lowers the pressure.
  • At a hackathon, quick manual solutions are sometimes better than fully developed toolkits and libraries.

What's next for Memories

  • Happy hours!
  • More exploration of a mobile, open-source solution for photogrammetry.
