Gallery: raw point cloud, refined point cloud, forward-inferred image, raw Kinect data.
A machine-learning-based 3D scanner using traditional camera hardware | Made with <3 from Mike and Tom
3D depth extraction from 2D images using a convolutional autoencoder neural network, with a custom pipeline to generate a point cloud from the forward-inferred depth map, mesh it, and texture it.
There are a TON of 2D cameras out there, but very few 3D scanners, because they require specialized and expensive hardware. Our project's aim was to leverage machine learning to infer 3D depth maps, which can then be used to generate point clouds and 3D meshes.
How we built it
Our pipeline has a few different parts. First, we used Keras and TensorFlow to build a convolutional autoencoder network. We trained it on around 11 GB of paired 2D and depth data captured with a Kinect v2. Once trained, the network takes an unseen image, breaks it up into patches, and computes an associated depth map for each. We then do some postprocessing with PIL, using a mask and an in-painting algorithm to remove the patching grid. After the image is cleaned up, we use our own UV mapping algorithm to project the points back into 3D space. WHEW. If you're still with me, we then ran a Ball-Pivoting mesh reconstruction algorithm, followed by smoothing and texturing filters, to get what you see in our final output.
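The patch-then-stitch step above can be sketched in a few lines of numpy. This is a simplified illustration, not our actual code: `predict_depth` stands in for the trained autoencoder (here it just returns each patch's mean intensity as a fake depth), and the patch size is arbitrary.

```python
import numpy as np

PATCH = 4  # illustrative patch size; the real network's input size may differ

def predict_depth(patch):
    """Stand-in for the trained autoencoder: returns one fake depth
    value (the mean intensity) for every pixel in the patch."""
    return np.full(patch.shape, patch.mean())

def infer_depth_map(image):
    """Tile the image into non-overlapping PATCH x PATCH blocks,
    run depth inference on each block, and stitch the results
    back into a full-size depth map."""
    h, w = image.shape
    depth = np.zeros((h, w))
    for r in range(0, h - PATCH + 1, PATCH):
        for c in range(0, w - PATCH + 1, PATCH):
            depth[r:r+PATCH, c:c+PATCH] = predict_depth(
                image[r:r+PATCH, c:c+PATCH])
    return depth

image = np.arange(64, dtype=float).reshape(8, 8)
depth = infer_depth_map(image)
```

Because each patch is predicted independently, the stitched result has visible seams along patch boundaries — that's the "patching grid" the in-painting pass removes.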
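The projection back into 3D space can be sketched with a standard pinhole-camera back-projection; note this is a generic model, and the intrinsics `fx, fy, cx, cy` here are placeholder values, not the Kinect v2's actual calibration.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Back-project a depth map into an N x 3 point cloud using a
    pinhole camera model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy.
    Pixels with zero depth (no reading) are dropped."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]

# Tiny example: a 2x2 depth map with one missing reading
depth = np.array([[1.0, 0.0],
                  [2.0, 1.0]])
cloud = depth_to_point_cloud(depth, fx=1.0, fy=1.0, cx=0.5, cy=0.5)
```

A point cloud like this is what then feeds into the Ball-Pivoting mesh reconstruction.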
Challenges we ran into
- Getting high quality training data
- Getting diverse training data
- Training a neural network
- Computational geometry can be hard :/
Made for HackISU Spring 2017 ^-^