SA-ConvONet: Sign-Agnostic Optimization of Convolutional Occupancy Networks (ICCV 2021, Oral)

Overview

Sign-Agnostic Convolutional Occupancy Networks

Paper | Supplementary | Video | Teaser Video | Project Page

This repository contains the implementation of the paper:

SA-ConvONet: Sign-Agnostic Optimization of Convolutional Occupancy Networks, ICCV 2021 (Oral)

If you find our code or paper useful, please consider citing:

@inproceedings{tang2021sign,
  title={SA-ConvONet: Sign-Agnostic Optimization of Convolutional Occupancy Networks},
  author={Tang, Jiapeng and Lei, Jiabao and Xu, Dan and Ma, Feiying and Jia, Kui and Zhang, Lei},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  year={2021}
}

Contact Jiapeng Tang for questions, comments, and bug reports.

Installation

First, make sure that you have all dependencies in place. The simplest way to do this is to use Anaconda.

You can create an Anaconda environment called sa_conet using:

conda env create -f environment.yaml
conda activate sa_conet

Note: you might need to install torch-scatter manually following the official instructions:

pip install torch-scatter==2.0.4 -f https://pytorch-geometric.com/whl/torch-1.4.0+cu101.html

Next, compile the extension modules. You can do this via

python setup.py build_ext --inplace

Demo

First, run the script to get the demo data:

bash scripts/download_demo_data.sh

Reconstruct Large-Scale Matterport3D Scene

You can now quickly test our code on the real-world scene shown in the teaser. To this end, simply run:

python generate_optim_largescene.py configs/pointcloud_crop/demo_matterport.yaml

This script should create a folder out/demo_matterport/generation where the output meshes and input point cloud are stored.

Note: This experiment corresponds to our fully convolutional model, which we train only on small crops from our synthetic room dataset. This model can be directly applied to large-scale real-world scenes with real units and generates meshes in a sliding-window manner, as shown in the teaser. More details can be found in Section D.1 of our supplementary material. For training, you can use the config file pointcloud_crop/room_grid64.yaml.
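For reference, pretraining this fully convolutional model uses the standard training entry point described under Training below (assuming the config file lives under configs/):

python train.py configs/pointcloud_crop/room_grid64.yaml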

Reconstruct Synthetic Indoor Scene

You can also test on our synthetic room dataset by running:

python generate_optim_scene.py configs/pointcloud/demo_syn_room.yaml

Reconstruct ShapeNet Object

You can also test on the ShapeNet dataset by running:

python generate_optim_object.py configs/pointcloud/demo_shapenet.yaml

Note: configs/pointcloud/demo_shapenet.yaml is not shipped with the repository; you need to create it yourself (see the sketch below).
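A minimal sketch of such a config, assuming the ConvONet-style config system (inherit_from plus per-section overrides) and hypothetical demo paths:

inherit_from: configs/pointcloud/test_optim/shapenet_3plane.yaml
data:
  path: data/demo/shapenet_chair    # hypothetical folder containing the demo point clouds
training:
  out_dir: out/demo_shapenet        # hypothetical output directory
generation:
  generation_dir: generation

Adjust the inherited config and the paths to match your setup.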

Dataset

To evaluate a pretrained model or train a new model from scratch, you have to obtain the respective dataset. In this paper, we consider 4 different datasets:

ShapeNet

You can download the dataset (73.4 GB) by running the download script from Occupancy Networks. Afterwards, you should have the dataset in the data/ShapeNet folder.

Synthetic Indoor Scene Dataset

For scene-level reconstruction, we use a synthetic dataset of 5000 scenes with multiple objects from ShapeNet (chair, sofa, lamp, cabinet, table). There are also ground planes and randomly sampled walls.

You can download the preprocessed data (144 GB) provided by ConvONet using:

bash scripts/download_data.sh

This script should download and unpack the data automatically into the data/synthetic_room_dataset folder.
Note: The point-wise semantic labels are also provided in the dataset, which might be useful.

Alternatively, you can also preprocess the dataset yourself. To this end, you can:

  • download the ShapeNet dataset as described above.
  • check scripts/dataset_synthetic_room/build_dataset.py, modify the dataset paths, and run the script (see the example below).
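A hypothetical invocation of that step (the script may expect the ShapeNet paths to be edited inside the file or passed as arguments; check its argument parser first):

python scripts/dataset_synthetic_room/build_dataset.py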

Matterport3D

Download the Matterport3D dataset from the official website. Then use scripts/dataset_matterport/build_dataset.py to preprocess one of your favorite scenes, and put the processed data into the data/Matterport3D_processed folder.

ScanNet

Download ScanNet v2 data from the official ScanNet website. Then preprocess the data with scripts/dataset_scannet/build_dataset.py and put it into the data/ScanNet folder.
Note: Currently, the preprocessing script normalizes ScanNet data to a unit cube for the comparison shown in the paper, but you can easily adapt the code to produce data in real-world units. You can then use our fully convolutional model to run evaluation in a sliding-window manner.

Usage

When you have installed all binary dependencies and obtained the preprocessed data, you are ready to perform sign-agnostic optimization, run the pre-trained models, and train new models from scratch.

Mesh Generation for ConvONet

To generate meshes using a pre-trained model, use

python generate.py CONFIG.yaml

where you replace CONFIG.yaml with the correct config file.

Use pre-trained models

The easiest way is to use a pre-trained model. You can do this by using one of the config files under the pretrained folders.

For example, for 3D reconstruction from noisy point cloud with our 3-plane model on the synthetic room dataset, you can simply run:

python generate.py configs/pointcloud/pretrained/room_3plane.yaml

The script will automatically download the pretrained model and run the mesh generation. You can find the outputs in the out/.../generation_pretrained folders.

Note that the config files are only for generation, not for training new models: when these configs are used for training, the model will be trained from scratch, but during inference our code will still use the pretrained model.

The following pretrained models are provided:

pointcloud/shapenet_3plane.pt
pointcloud/room_grid64.pt
pointcloud_crop/room_grid64.pt
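
Assuming matching config files exist under the corresponding pretrained folders, the generation commands for these models would look like:

python generate.py configs/pointcloud/pretrained/shapenet_3plane.yaml
python generate.py configs/pointcloud/pretrained/room_grid64.yaml
python generate.py configs/pointcloud_crop/pretrained/room_grid64.yaml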

Sign-Agnostic Optimization of ConvONet

Before performing sign-agnostic, test-time optimization on the Matterport3D dataset, first run the script below to preprocess the test set:

python scripts/dataset_matterport/make_cropscene_dataset.py --in_folder $in_folder --out_folder $out_folder --do_norm

Please specify the in_folder and out_folder.
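For example, a hypothetical invocation with placeholder paths (adjust them to where you stored the processed Matterport3D scenes):

python scripts/dataset_matterport/make_cropscene_dataset.py --in_folder data/Matterport3D_processed --out_folder data/Matterport3D_processed_crop --do_norm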

To perform sign-agnostic, test-time optimization for more accurate surface mesh generation using a pretrained model, use

python generate_optim_object.py configs/pointcloud/test_optim/shapenet_3plane.yaml
python generate_optim_scene.py configs/pointcloud/test_optim/room_grid64.yaml
python generate_optim_largescene.py configs/pointcloud_crop/test_optim/room_grid64.yaml

Evaluation

For evaluation of the models, we provide the scripts eval_meshes.py and eval_meshes_optim.py. You can run them using:

python eval_meshes.py CONFIG.yaml
python eval_meshes_optim.py CONFIG.yaml

These scripts take the meshes generated in the previous step and evaluate them using a standardized protocol. The output will be written to .pkl/.csv files in the corresponding generation folder, which can be processed using pandas.
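
For example, a minimal Python sketch for inspecting the results (output paths and filenames are assumptions; check the generation folder for the actual names):

import pandas as pd

# Per-class summary written by the evaluation script (filename assumed).
df = pd.read_csv('out/pointcloud/shapenet_3plane/generation/eval_meshes.csv')
print(df)

# Full per-mesh results, stored as a pickled DataFrame (filename assumed).
df_full = pd.read_pickle('out/pointcloud/shapenet_3plane/generation/eval_meshes_full.pkl')
print(df_full.mean(numeric_only=True))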

Training

Finally, to pretrain a new network from scratch, run:

python train.py CONFIG.yaml

For available training options, please take a look at configs/default.yaml.
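
In the ConvONet-style config system, configs/default.yaml provides the default values and the config file you pass on the command line overrides them. For example, pretraining the ShapeNet model might look like this (config name assumed to mirror the pretrained model listed above):

python train.py configs/pointcloud/shapenet_3plane.yaml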

Acknowledgements

Most of the code is borrowed from ConvONet. We thank Songyou Peng for his great work.
