Point2Mesh: Turning Point Clouds into Meshes with Neural Networks

This is a reimplementation of Point2Mesh: A Self-Prior for Deformable Meshes by Rana Hanocka, Gal Metzer, Raja Giryes, and Daniel Cohen-Or. It uses convolutional neural networks, adapted to operate on meshes, to shrink-wrap a deformable mesh around an input point cloud. Watch our video or take a look at our poster to learn more!

Project Proposal

https://docs.google.com/document/d/1a4a45bPWBrISA9RBdINrpJnNSSWzRVHE_Zvs0IRUZ98/edit?usp=sharing

Final Write-up

https://docs.google.com/document/d/1Q7iEyYqiZG9tiCgu9R-4uabK6el1sa9CSbcZdlTMW0g/edit?usp=sharing

Updates


Introduction

When real-world objects are converted into 3D models, they usually go through a laser-scanning process that produces 3D point clouds. However, point clouds are a sparse representation that isn't directly usable in applications like visual effects and video games, which rely on 3D meshes. The paper's objective is to reconstruct a watertight mesh from a 3D point cloud, improving on previous optimization-based approaches. We chose this paper for three reasons. First, our group is interested in graphics and has experience with 3D modeling and CAD tools. Second, it seemed like a good opportunity to implement something fairly challenging while broadening our deep learning experience: studying the original PyTorch code and learning the process of reimplementing it in TensorFlow. Finally, the model produces compelling visual results that include DINOSAURS!

Challenges

What has been the hardest part of the project you've encountered so far?

Understanding the authors' original code has been the hardest part so far. The code is barely commented and performs many complex PyTorch/NumPy indexing and padding operations without explanation. We found that we could reimplement much of it more simply, and along the way we found and fixed minor bugs (e.g., every triangle's area being off by a factor of 2).
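To illustrate the factor-of-2 bug: a triangle's area is half the norm of the cross product of two edge vectors, so dropping the 0.5 yields the area of the parallelogram spanned by those edges instead. A minimal NumPy sketch (the function name and signature here are ours, for illustration, not the authors' code):

```python
import numpy as np

def triangle_areas(vertices, faces):
    """Per-face triangle areas for a mesh.

    vertices: (V, 3) float array of vertex positions.
    faces:    (F, 3) int array of vertex indices per triangle.

    The area is half the norm of the cross product of two edge vectors;
    omitting the 0.5 (the bug we hit) gives the parallelogram area instead.
    """
    v0 = vertices[faces[:, 0]]
    v1 = vertices[faces[:, 1]]
    v2 = vertices[faces[:, 2]]
    cross = np.cross(v1 - v0, v2 - v0)          # (F, 3) parallelogram normals
    return 0.5 * np.linalg.norm(cross, axis=1)  # (F,) triangle areas
```

For example, the unit right triangle (0,0,0), (1,0,0), (0,1,0) has area 0.5; without the 0.5 factor the function would report 1.0.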

Insights

Are there any concrete results you can show at this point?

The components we've completed so far:

- Pooling and convolution layers
- Beam-gap and bidirectional chamfer distance losses
- Convex hull formation
- Mesh representation, checker, and collapse operation
- Sampling method to generate a representative point cloud from a mesh
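Of the losses above, the bidirectional chamfer distance is the simplest to sketch: each point in one cloud is matched to its nearest neighbor in the other, the squared distances are averaged, and the two directions are summed. A brute-force NumPy sketch (our actual layers are written in TensorFlow; this version is for clarity only):

```python
import numpy as np

def chamfer_distance(a, b):
    """Bidirectional chamfer distance between point sets a (N, 3) and b (M, 3).

    Brute-force O(N*M) version: compute all pairwise squared distances,
    take each side's nearest-neighbor distances, and sum the two means.
    """
    # Pairwise squared distances via broadcasting, shape (N, M).
    d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
    a_to_b = d2.min(axis=1).mean()  # each point in a to nearest in b
    b_to_a = d2.min(axis=0).mean()  # each point in b to nearest in a
    return a_to_b + b_to_a
```

For identical clouds the distance is zero; for a = {(0,0,0)} and b = {(1,0,0)} it is 1 + 1 = 2, since each direction contributes the squared distance 1.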

Because of the significant overhead of creating the mesh representation and layers, we haven’t been able to run the network yet. However, we have written unit tests for each component.
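As an example of the kind of unit test we mean, here is a self-contained sketch (pure Python with illustrative names, not our actual test suite) checking a barycentric triangle-sampling helper similar in spirit to our mesh-sampling component:

```python
import random
import unittest

def sample_point(tri):
    """Uniformly sample a point on a triangle via barycentric coordinates.

    tri: three (x, y, z) vertex tuples. Uses the standard square-root trick
    so samples are uniform over the triangle's area.
    """
    (ax, ay, az), (bx, by, bz), (cx, cy, cz) = tri
    r1, r2 = random.random(), random.random()
    s = r1 ** 0.5
    u, v, w = 1.0 - s, s * (1.0 - r2), s * r2  # barycentric weights, sum to 1
    return (u * ax + v * bx + w * cx,
            u * ay + v * by + w * cy,
            u * az + v * bz + w * cz)

class TestSampling(unittest.TestCase):
    def test_points_stay_on_triangle(self):
        # Right triangle in the z = 0 plane with legs of length 1.
        tri = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
        for _ in range(100):
            x, y, z = sample_point(tri)
            self.assertAlmostEqual(z, 0.0)           # stays in the plane
            self.assertGreaterEqual(x, 0.0)
            self.assertGreaterEqual(y, 0.0)
            self.assertLessEqual(x + y, 1.0 + 1e-9)  # inside the triangle
```

Tests like this can be run with `python -m unittest` and let us validate each component in isolation before the full network is assembled.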

Plan

Are you on track with your project? What do you need to dedicate more time to? What are you thinking of changing, if anything?

Yes, we are on track. Understanding, building, and testing the aforementioned components took significant effort. We feel that we're in a good position to assemble and begin testing our network.
