Intro
What if you could turn anything you see into a 3D model, ready to upload to the metaverse?
We present a highly accessible, scalable, end-to-end consumer photogrammetry solution that enables just that.
The pipeline takes ordinary photographs as input, processes them server-side, and outputs a 3D model. Because no special sensors are required, the solution is accessible to over 6.6 billion smartphone devices worldwide (Statista).
App
We created a best-in-class client-side capture app that runs on any iOS, Android, or Windows phone device.
The app is specifically designed to guide the user to take photos in a hemisphere around the object, because our 3D reconstruction model is based on fitting around a hemisphere.
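The hemisphere capture pattern can be sketched as sampling camera positions at a few elevation rings, each with evenly spaced azimuths. This is a minimal illustration of the idea, not the app's actual guidance logic; the ring counts and angles are assumptions.

```python
import numpy as np

def hemisphere_viewpoints(n_rings=4, per_ring=8, radius=1.0):
    """Sample camera positions on an upper hemisphere around the object.

    Cameras sit on n_rings elevation rings; each ring holds per_ring
    evenly spaced azimuth angles. Returns an (n_rings * per_ring, 3)
    array of positions, all above the object (z >= 0).
    """
    points = []
    # Elevations strictly between the equator (0) and the pole (pi/2),
    # so every shot sees the object from above the ground plane.
    for elev in np.linspace(np.pi / 8, 3 * np.pi / 8, n_rings):
        for azim in np.linspace(0, 2 * np.pi, per_ring, endpoint=False):
            x = radius * np.cos(elev) * np.cos(azim)
            y = radius * np.cos(elev) * np.sin(azim)
            z = radius * np.sin(elev)  # always positive: above the object
            points.append((x, y, z))
    return np.array(points)
```

Guiding the user along such a pattern ensures every part of the object's upper surface is seen from multiple angles, which the reconstruction step depends on.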
Serverside
Once the images are uploaded, we run a PyTorch3D model, optimized for Habana Gaudi DL1 instances on EC2, that fits textures to a 3D mesh.
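The actual pipeline uses PyTorch3D's differentiable rendering to fit textures; as a greatly simplified stand-in, the color-fitting step can be sketched as a least-squares fit of per-vertex colors to photo samples. This sketch assumes the hard photogrammetry parts (camera poses, projection, visibility) are already solved, which PyTorch3D handles in the real system.

```python
import numpy as np

def fit_vertex_colors(observations, vertex_ids, n_vertices):
    """Least-squares per-vertex colors.

    observations: (n_obs, 3) RGB samples taken from the uploaded photos
    vertex_ids:   (n_obs,) index of the mesh vertex each sample projects to
    Each vertex takes the mean of all photo samples that project onto it,
    which is the closed-form least-squares solution for this model.
    """
    sums = np.zeros((n_vertices, 3))
    counts = np.zeros(n_vertices)
    np.add.at(sums, vertex_ids, observations)  # accumulate samples per vertex
    np.add.at(counts, vertex_ids, 1)
    counts[counts == 0] = 1  # vertices never observed stay black
    return sums / counts[:, None]
```

In the real server-side model the fit is iterative and rendered through the mesh, but the objective is the same: make the textured 3D model reproduce the captured photos as closely as possible.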
The result can be augmented back into reality, or exported in FBX, GLB, USDZ, and other formats for the metaverse and beyond.
Addendum
As metaverses grow into the future of computing, demand for robust 3D content creation is outpacing supply.
The added degree of freedom in 3D content creation introduces complexity that neither traditional 2D art software nor 3D modeling software has solved with the efficiency demand requires.
Moreover, consumer LiDAR devices such as the iPhone 12 Pro Max and later do not produce 3D scans with fidelity as high as a photogrammetry solution such as the one we present.