Attentive Implicit Representation Networks (AIR-Nets)


Preprint | Supplementary | Accepted at the International Conference on 3D Vision (3DV)

teaser.mov

This repository is the official implementation of the paper

AIR-Nets: An Attention-Based Framework for Locally Conditioned Implicit Representations
by Simon Giebenhain and Bastian Goldluecke

Furthermore, it provides a unified framework to run Occupancy Networks (ONets), Convolutional Occupancy Networks (ConvONets) and IF-Nets.

More qualitative results of our method can be found here.

Install

All experiments with AIR-Nets were run using CUDA version 11.2 and the official PyTorch Docker image nvcr.io/nvidia/pytorch:20.11-py3, as published by NVIDIA here. However, as the model is solely based on simple, common mechanisms, older CUDA and PyTorch versions should also work. We provide the air-net_env.yml file that lists all Python requirements for this project. To install them conveniently with Anaconda you can use:

conda env create -f air-net_env.yml
conda activate air-net

AIR-Nets use farthest point sampling (FPS) to downsample the input. Run

pip install pointnet2_ops_lib/.

in order to install the CUDA implementation of FPS. Credits for this go to Erik Wijmans's GitHub, from where the code was copied for convenience.
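To illustrate how this operator is typically used, here is a minimal, self-contained sketch (not taken from the repository; the batch and point counts are arbitrary) of downsampling a batch of point clouds with pointnet2_ops:

import torch
from pointnet2_ops.pointnet2_utils import furthest_point_sample

# hypothetical input: a batch of 2 point clouds with 30000 points each (must live on the GPU)
points = torch.rand(2, 30000, 3, device='cuda')

# pick 512 well-spread points per cloud; returns the selected indices of shape (2, 512)
idx = furthest_point_sample(points, 512).long()
downsampled = torch.gather(points, 1, idx.unsqueeze(-1).expand(-1, -1, 3))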

Running

python setup.py build_ext --inplace

builds the MISE algorithm (see http://www.cvlibs.net/publications/Mescheder2019CVPR.pdf), which is used for extracting the reconstructed shapes as meshes.

If you want to run Convolutional Occupancy Networks, you will have to install torch-scatter following the official instructions found here.

Data Preparation

In our paper we mainly ran experiments on the ShapeNet dataset, preprocessed in two different flavours. The following describes the preprocessing for both alternatives. Note that they work independently, so there is no need to prepare both. (When training with noise I would recommend the ONet data, since the supervision of the IF-Net data is concentrated so close to the boundary that the problem becomes somewhat ill-posed; adapting the noise level and supervision distance can solve this, however.)

Preparing the data used in ONets and ConvONets

To prepare the ONet data, clone their repository. Navigate to their repo (cd occupancy_networks) and run

bash scripts/download_data.sh

which will download and unpack the data automatically (consuming 73.4 GB). From the perspective of the main repository this will place the data in occupancy_networks/data/ShapeNet.

Preparing the IF-Net data

A small disclaimer: Preparing the data as in this tutorial will produce ~700GB of data. Deleting the .obj and .off files afterwards should reduce the load to 250GB. Storage demand can further be reduced by reducing the number of samples in data_processing/boundary_sampling.py. If storage is scarce, the ONet data (see above) is an alternative.

This data preparation pipeline is mainly copied from IF-Nets, but slightly simplified.

Install a small library needed for the preprocessing using

cd data_processing/libmesh/
python setup.py build_ext --inplace
cd ../..

Furthermore you might need to install meshlab and xvfb using

apt-get update
apt-get install meshlab
apt-get install xvfb

To install gcc you can run sudo apt install build-essential.

To get started, download the preprocessed data by Xu et al. (NeurIPS'19) from Google Drive into the shapenet folder.

Please note that some objects in this dataset were made watertight "incorrectly". More specifically, some object parts are "double coated", such that the object boundary is actually composed of two boundaries lying very close together. Therefore the "inside" of such objects lies between these two boundaries, whereas the "true inside" is classified as outside. This can clearly lead to ugly reconstructions, since representing such a thin "inside" is much trickier.

Then extract the files into shapenet/data using:

ls shapenet/*.tar.gz |xargs -n1 -i tar -xf {} -C shapenet/data/

Next, the input and supervision data is prepared. First, the data is converted to the .off format and scaled (such that the longest edge of each object's bounding box has unit length) using

python data_processing/convert_to_scaled_off.py
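As a rough illustration of what this step does (a minimal sketch, not the actual script; the file paths are hypothetical and the centering step is an assumption), the normalization can be thought of as:

import trimesh

mesh = trimesh.load('shapenet/data/SOME_CLASS/SOME_OBJECT/model.obj')
extents = mesh.bounding_box.extents                   # edge lengths of the axis-aligned bounding box
mesh.apply_translation(-mesh.bounding_box.centroid)   # center the object (assumed)
mesh.apply_scale(1.0 / extents.max())                 # longest bounding box edge becomes unit length
mesh.export('shapenet/data/SOME_CLASS/SOME_OBJECT/mesh.off')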

Then the point cloud input data can be created using

python data_processing/sample_surface.py

which samples 30,000 points uniformly distributed on the surface of the ground truth mesh. During training and testing, the input point clouds will be randomly subsampled from these surface samples. The coordinates and corresponding ground truth occupancy values used for supervision during training can be generated using

python data_processing/boundary_sampling.py -sigma 0.1
python data_processing/boundary_sampling.py -sigma 0.01

where -sigma specifies the standard deviation of the normally distributed displacements added onto surface samples. Each call will generate 100,000 samples near the object's surface, for which ground truth occupancy values are computed using the implicit waterproofing algorithm from the IF-Net supplementary. I have not experimented with any other values for sigma and just copied the proposed values.
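Conceptually, each call does something like the following minimal sketch (illustrative only; the file path is hypothetical and the in/out test here uses trimesh instead of the implicit waterproofing algorithm actually used by boundary_sampling.py):

import numpy as np
import trimesh

sigma = 0.1
num_samples = 100000

mesh = trimesh.load('shapenet/data/SOME_CLASS/SOME_OBJECT/mesh.off')
surface_points, _ = trimesh.sample.sample_surface(mesh, num_samples)        # points on the surface
boundary_points = surface_points + sigma * np.random.randn(num_samples, 3)  # Gaussian displacement
occupancies = mesh.contains(boundary_points)                                 # ground truth in/out labels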

In order to remove meshes that could not be preprocessed correctly (should not be more than around 15 meshes) you should run

python data_processing/filter_corrupted.py -file 'surface_30000_samples.npy' -delete

Be careful with this command: the directories of all objects that do not contain a surface_30000_samples.npy file are deleted. If you chose to use a different number of points, please make sure to adapt the command accordingly.

Finally the data should be located in shapenet/data.

Preparing the FAUST dataset

In order to download the FAUST dataset, visit http://faust.is.tue.mpg.de and sign up there. Once your account is approved you can download a .zip file named MPI-FAUST.zip. Please place the extracted folder in the main folder, such that the data can be found in MPI-FAUST.

Training

For the training and model specification I use .yaml files. Their structure is explained in a separate markdown file here, which also explains which parameters can be tuned to make the model less memory intensive.

To train the model run

python train.py -exp_name YOUR_EXP_NAME -cfg_file configs/YOUR_CFG_FILE -data_type YOUR_DATA_TYPE

which stores results in experiments/YOUR_EXP_NAME. -cfg_file specifies the path to the config file; its content will also be stored in experiments/config.yaml. YOUR_DATA_TYPE can either be 'ifnet', 'onet' or 'human' and dictates which dataset to use. Make sure to adapt the batch_size parameter in the config file according to your GPU memory.
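For example, training on the ONet flavour of ShapeNet could look as follows (the experiment name is arbitrary and the config file name remains a placeholder):

python train.py -exp_name airnet_onet -cfg_file configs/YOUR_CFG_FILE -data_type onet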

Training progress is saved using tensorboard. You can visualize it by running

tensorboard --logdir experiments/YOUR_EXP_NAME/summary/ 

Note that checkpoints (including the optimizer) are saved after each epoch in the checkpoints folder. Therefore training can seamlessly be continued.

Generation

To generate reconstructions of the test set, run

python generate.py -exp_name YOUR_EXP_NAME -checkpoint CKPT_NUM -batch_points 400000 -method REC_METHOD 

where CKPT_NUM specifies the epoch to load the model from and -batch_points specifies how many points are batched together; it may have to be adapted to your GPU memory.
REC_METHOD can either be mise or mcubes. The former (and recommended) option uses the MISE algorithm for reconstruction; the latter uses the vanilla marching cubes algorithm. For MISE you can specify two additional parameters: -mise_res (initial resolution, default 64) and -mise_steps (number of refinement steps, default 2). (Note that we used 3 refinement steps for the main results of the dense models in the paper, just to be on the safe side and not miss any details.) For the regular marching cubes algorithm you can use -mcubes_res to specify the resolution of the grid (default 128). Note that the cubic scaling quickly renders this really slow.

The command will place the generated meshes in the .off format in the generation subfolder of the corresponding evaluation directory inside experiments/YOUR_EXP_NAME; the directory name encodes the checkpoint as well as the reconstruction settings (mise_res and mise_steps for MISE, mcubes_res for marching cubes).
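For example, a MISE-based reconstruction with the default settings mentioned above could be requested as follows (CKPT_NUM remains a placeholder for the checkpoint epoch):

python generate.py -exp_name YOUR_EXP_NAME -checkpoint CKPT_NUM -batch_points 400000 -method mise -mise_res 64 -mise_steps 2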

Evaluation

Running

python data_processing/evaluate.py -reconst -generation_path experiments/YOUR_EXP_NAME/evaluation_CKPT.../generation

will evaluate the generated meshes using the most common metrics: volumetric IoU, Chamfer distance (L1 and L2), normal consistency and F-score.
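As an illustration of one of these metrics, a symmetric Chamfer-L2 distance between two point sets can be sketched as follows (a minimal example for intuition only; evaluate.py computes the full metric suite on the generated and ground truth meshes):

import numpy as np
from scipy.spatial import cKDTree

def chamfer_l2(points_a, points_b):
    # mean squared nearest-neighbour distance in both directions
    dist_a, _ = cKDTree(points_b).query(points_a)
    dist_b, _ = cKDTree(points_a).query(points_b)
    return np.mean(dist_a ** 2) + np.mean(dist_b ** 2)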

The results are summarized in experiments/YOUR_EXP_NAME/evaluation_CKPT.../evaluation_results.pkl by running

python data_processing/evaluate_gather.py -generation_path experiments/YOUR_EXP_NAME/evaluation_CKPT.../generation

Pretrained Models

Weights of trained models can be found here. For example, create a folder experiments/PRETRAINED_MODEL, place the corresponding config file at experiments/PRETRAINED_MODEL/configs.yaml and the weights at experiments/PRETRAINED_MODEL/checkpoints/ckpt.tar. Then run

python generate.py -exp_name PRETRAINED_MODEL -ckpt_name ckpt.tar -data_type DATA_TYPE
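Assuming the downloaded files are called config.yaml and ckpt.tar (the download paths below are placeholders), the setup could look like:

mkdir -p experiments/PRETRAINED_MODEL/checkpoints
cp path/to/downloaded/config.yaml experiments/PRETRAINED_MODEL/configs.yaml
cp path/to/downloaded/ckpt.tar experiments/PRETRAINED_MODEL/checkpoints/ckpt.tar
python generate.py -exp_name PRETRAINED_MODEL -ckpt_name ckpt.tar -data_type DATA_TYPE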

Contact

For questions, comments and to discuss ideas please contact Simon Giebenhain via simon.giebenhain (at) uni-konstanz (dot) de.

Citation

@inproceedings{giebenhain2021airnets,
title={AIR-Nets: An Attention-Based Framework for Locally Conditioned Implicit Representations},
author={Giebenhain, Simon and Goldluecke, Bastian},
booktitle={2021 International Conference on 3D Vision (3DV)},
year={2021},
organization={IEEE}
}

Acknowledgements

Large parts of this repository, as well as its structure, are copied from Julian Chibane's GitHub repository for the IF-Net paper. Please consider also citing their work when using this repository!

This project also uses libraries from Occupancy Networks by Mescheder et al. (CVPR'19) and from Convolutional Occupancy Networks by Peng et al. (ECCV'20).
We also want to thank the authors of DISN (Xu et al. NeurIPS'19), who made their preprocessed ShapeNet data publicly available. Please consider citing them if you use our code.

License

Copyright (c) 2020 Julian Chibane, Max-Planck-Gesellschaft and
2021 Simon Giebenhain, Universität Konstanz

Please read carefully the following terms and conditions and any accompanying documentation before you download and/or use this software and associated documentation files (the "Software").

The authors hereby grant you a non-exclusive, non-transferable, free of charge right to copy, modify, merge, publish, distribute, and sublicense the Software for the sole purpose of performing non-commercial scientific research, non-commercial education, or non-commercial artistic projects.

Any other use, in particular any use for commercial purposes, is prohibited. This includes, without limitation, incorporation in a commercial product, use in a commercial service, or production of other artefacts for commercial purposes. For commercial inquiries, please see above contact information.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

You understand and agree that the authors are under no obligation to provide either maintenance services, update services, notices of latent defects, or corrections of defects with regard to the Software. The authors nevertheless reserve the right to update, modify, or discontinue the Software at any time.

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software. You agree to cite the Implicit Functions in Feature Space for 3D Shape Reconstruction and Completion paper and the AIR-Nets: An Attention-Based Framework for Locally Conditioned Implicit Representations paper in documents and papers that report on research using this Software.
