Introduction

Problem statement: “Learning texture (and shape) representations: Learn high-quality textures of 3D data to enable learning of probabilistic generative models for texturing unseen 3D shapes.”

Recent works on 3D shape representations (such as AtlasNet, MeshCNN, and DeepSDF) and on texture synthesis for 2D images (such as image-to-image translation with conditional adversarial networks) inspired us to build a model that learns texture information (appearance) and relates shapes to appearance in 3D space. In our background research we found a closely related 2019 work, Texture Fields, which learns a parameterized continuous function for representing texture information in 3D space. Our project builds on this work: we will train a network that produces more meaningful latent representations of appearance and shape. Such representations, when randomly sampled at test time, should yield higher-quality, more physically plausible textures. We plan to incorporate more recent techniques (such as DeepSDF and SIREN) and compare the efficiency and output quality of the resulting models using training data from 3D-FUTURE and ShapeNet.
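To make the core idea concrete, here is a minimal sketch of a texture field as a conditional MLP; the layer sizes, latent dimension, and conditioning scheme are our own illustrative choices, not the exact Texture Fields architecture:

```python
import torch
import torch.nn as nn

class TextureField(nn.Module):
    """Sketch of a texture field: an MLP mapping a 3D point, conditioned
    on a shape/appearance latent code, to an RGB color. Sizes are illustrative."""
    def __init__(self, latent_dim=512, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3 + latent_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),  # RGB output
        )

    def forward(self, points, z):
        # points: (B, N, 3) query locations; z: (B, latent_dim) latent code
        z = z.unsqueeze(1).expand(-1, points.shape[1], -1)
        return torch.sigmoid(self.net(torch.cat([points, z], dim=-1)))  # colors in [0, 1]
```

Because the function is continuous in the query point, texture can be evaluated at any surface location independent of mesh resolution.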

Challenges

Our project is based on the Texture Fields code and architecture, and although we plan on making various structural changes aimed at improving performance, our first step was making sure we could run their code as-is. Specifically, we wanted each of us individually to have the code set up and running somewhere accessible. While the authors of Texture Fields made their code publicly available, getting it running locally has been more challenging and time-consuming than anticipated.

One upcoming challenge is figuring out how to store and share the data. The training data provided by Texture Fields contains only the car category, which is already 33 GB. That is too large to handle comfortably on our local machines, so we will clearly need to shift to GCP to train our model. As no member of our group has prior experience with GCP, we anticipate a learning curve as we determine the best way to run our project there.

Insights

The demo provided by Texture Fields runs on our local machines, and we have gotten both the conditional and unconditional generators to work. The output consists of images rendered from different viewpoints; however, the current model cannot reproduce finer details with high quality, which is an area we want to improve. We also realized that the paper's released implementation does not include the step that samples point clouds from the textured input meshes, so this will need to be incorporated into our data preprocessing.

Plan

Understanding the Texture Fields code has taken longer than we had hoped, and we discovered that more preprocessing steps are necessary than we previously anticipated. The Texture Fields pipeline first preprocesses each mesh into a point cloud, but the publicly available code does not include these preprocessing scripts. As such, we will have to devote time to preprocessing the 3D-FUTURE data so that it is compatible with Texture Fields (a sketch of the point-cloud sampling step is below). We are a little behind where we had hoped to be at this point, but we have a plan for moving forward; finding the preprocessing scripts used for ShapeNet would help speed this up.
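For the mesh-to-point-cloud step, a library such as trimesh can sample the surface uniformly; the function name, point count, and output format below are our assumptions for 3D-FUTURE, not the preprocessing Texture Fields actually used:

```python
import numpy as np
import trimesh

def mesh_to_pointcloud(mesh_path, n_points=100_000):
    """Sample a uniform point cloud (with normals) from a mesh surface."""
    mesh = trimesh.load(mesh_path, force='mesh')  # collapse scenes into a single mesh
    points, face_idx = trimesh.sample.sample_surface(mesh, n_points)
    normals = mesh.face_normals[face_idx]  # per-point normals from the sampled faces
    return points.astype(np.float32), normals.astype(np.float32)
```

Colors for each sampled point would still need to be looked up from the mesh's texture map via the UV coordinates of the sampled faces, which is the part we expect to take the most care.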

There are two main tasks for our team at the current stage: preprocessing and building the model. One group will preprocess the 3D meshes into point clouds, depth maps, and rendered images to be fed into training later, while the other group will dive into the source code (Texture Fields, DeepSDF, SIREN), understand the detailed structures, and port the model to GCP. Then we will swap out the shape and image encoders to test whether newer methods can learn more meaningful representations; one candidate change, sketched below, is replacing ReLU layers in the decoder with SIREN's sine layers. Finally, we will attempt to train our model on a high-quality dataset of 3D textures to explore how the model performs with higher-frequency input.
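As one example of a swap we are considering, here is a sine layer as described in the SIREN paper, with its frequency scaling and initialization; using it inside the texture decoder is our own idea, not something Texture Fields does:

```python
import numpy as np
import torch
import torch.nn as nn

class SineLayer(nn.Module):
    """One SIREN layer: a linear map followed by sin(omega_0 * x),
    initialized as described in the SIREN paper."""
    def __init__(self, in_features, out_features, is_first=False, omega_0=30.0):
        super().__init__()
        self.omega_0 = omega_0
        self.linear = nn.Linear(in_features, out_features)
        with torch.no_grad():
            # First layer spans the input range; later layers keep activations well-distributed.
            bound = 1.0 / in_features if is_first else np.sqrt(6.0 / in_features) / omega_0
            self.linear.weight.uniform_(-bound, bound)

    def forward(self, x):
        return torch.sin(self.omega_0 * self.linear(x))
```

Sine activations tend to fit high-frequency signals better than ReLU, which is exactly the loss of finer details we observed in the demo output.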
