Overview

Differentiable Volumetric Rendering

Paper | Supplementary | Spotlight Video | Blog Entry | Presentation | Interactive Slides | Project Page

This repository contains the code for the CVPR 2020 paper Differentiable Volumetric Rendering: Learning Implicit 3D Representations without 3D Supervision.

You can find detailed usage instructions for training your own models and using pre-trained models below.

If you find our code or paper useful, please consider citing

@inproceedings{DVR,
    title = {Differentiable Volumetric Rendering: Learning Implicit 3D Representations without 3D Supervision},
    author = {Niemeyer, Michael and Mescheder, Lars and Oechsle, Michael and Geiger, Andreas},
    booktitle = {Proc. IEEE Conf. on Computer Vision and Pattern Recognition (CVPR)},
    year = {2020}
}

Installation

First, make sure that you have all dependencies in place. The simplest way to do so is to use Anaconda.

You can create an Anaconda environment called dvr using

conda env create -f environment.yaml
conda activate dvr

Next, compile the extension modules. You can do this via

python setup.py build_ext --inplace

Demo

You can now test our code on the provided input images in the demo folder. To this end, start the generation process for one of the config files in the configs/demo folder. For example, simply run

python generate.py configs/demo/demo_combined.yaml

This script should create a folder out/demo/demo_combined where the output meshes are stored. The script copies the inputs into the generation/inputs folder and creates the meshes in the generation/meshes folder. Moreover, it creates a generation/vis folder where inputs and outputs are copied together.
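
For reference, the output layout described above looks like this:

out/demo/demo_combined
└── generation
    ├── inputs    (copies of the input images)
    ├── meshes    (generated output meshes)
    └── vis       (inputs and outputs side by side)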

Dataset

Download Datasets

To evaluate a pre-trained model or train a new model from scratch, you have to obtain the respective dataset. We use three different datasets in the DVR project:

  1. ShapeNet for 2.5D supervised models (using the Choy et al. renderings as input and our renderings as supervision)
  2. ShapeNet for 2D supervised models (using the Kato et al. renderings)
  3. A subset of the DTU multi-view dataset

You can download our preprocessed data using

bash scripts/download_data.sh

and following the instructions. The sizes of the datasets are 114GB (1), 34GB (2), and 0.5GB (3).

This script should download and unpack the data automatically into the data folder.

Data Convention

Please have a look at the FAQ for details regarding the type of camera matrices we use.
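
As a rough orientation, the snippet below sketches a standard pinhole projection with an intrinsic matrix and a world-to-camera matrix. It is only an illustration of the general convention (all values and variable names here are made up); please consult the FAQ for the exact matrix names and definitions used in this repository.

import numpy as np

# Illustrative pinhole camera model, not the repository's loading code:
# K holds the intrinsics, [R | t] the world-to-camera extrinsics.
K = np.array([[500.0,   0.0, 112.0],
              [  0.0, 500.0, 112.0],
              [  0.0,   0.0,   1.0]])
Rt = np.hstack([np.eye(3), np.array([[0.0], [0.0], [2.0]])])

X_world = np.array([0.1, -0.2, 0.5, 1.0])  # homogeneous 3D point
x = K @ (Rt @ X_world)                     # project into the image plane
u, v = x[:2] / x[2]                        # perspective division -> pixels
print(u, v)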

Usage

When you have installed all binary dependencies and obtained the preprocessed data, you are ready to run our pre-trained models and train new models from scratch.

Generation

To generate meshes using a trained model, use

python generate.py CONFIG.yaml

where you replace CONFIG.yaml with the correct config file.

The easiest way is to use a pre-trained model. You can do so by using one of the config files whose names end in _pretrained.yaml.

For example, for our 2.5D supervised single-view reconstruction model run

python generate.py configs/single_view_reconstruction/multi_view_supervision/ours_depth_pretrained.yaml

or for our multi-view reconstruction from RGB images and sparse depth maps for the birds object run

python generate.py configs/multi_view_reconstruction/birds/ours_depth_mvs_pretrained.yaml

Our script will automatically download the model checkpoints and run the generation. You can find the outputs in the out/.../pretrained folders.

Please note that the *_pretrained.yaml config files are intended for generation only, not for training new models: if you use one of them for training, the model will be trained from scratch, but during inference our code will still use the pre-trained model.

Generation From Your Own Single Images

Similar to our demo, you can easily generate 3D meshes from your own single images. To this end, create a folder which contains your own images (e.g. media/my_images). Next, you can reuse the config file configs/demo/demo_combined.yaml and just adjust the data: path and training: out_dir fields to your needs. For example, you can set the config file to

inherit_from: configs/single_view_reconstruction/multi_view_supervision/ours_combined_pretrained.yaml
data:
  dataset_name: images
  path: media/my_images
training:
  out_dir: out/my_3d_models

to generate 3D models for the images in media/my_images. The models will be saved to out/my_3d_models. Similar to before, to start the generation process, run

python generate.py configs/demo/demo_combined.yaml 

Note: You can only expect our model to provide reasonable results on data which is similar to what it was trained on (white background, single object, etc.).

Evaluation

For evaluation of the models, we provide the script eval_meshes.py. You can run it using

python eval_meshes.py CONFIG.yaml

The script takes the meshes generated in the previous step and evaluates them using a standardized protocol. The output is written to .pkl and .csv files in the corresponding generation folder and can be processed with pandas.
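
As an example, the results can be inspected with a few lines of pandas. The file name below is an assumption (adjust it to the .csv that eval_meshes.py actually writes into your generation folder):

import pandas as pd

# Hypothetical output path; use the .csv produced by eval_meshes.py.
df = pd.read_csv('out/demo/demo_combined/generation/eval_meshes.csv')
print(df.head())                   # per-mesh metrics
print(df.mean(numeric_only=True))  # metrics averaged over all meshes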

Training

Finally, to train a new network from scratch, run

python train.py CONFIG.yaml

where you replace CONFIG.yaml with the name of the configuration file you want to use.

You can monitor the training process at http://localhost:6006 using TensorBoard:

cd OUTPUT_DIR
tensorboard --logdir ./logs

where you replace OUTPUT_DIR with the respective output directory.

For available training options, please take a look at configs/default.yaml.
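
As an illustration, a new experiment config typically inherits from an existing one and overrides individual options, analogous to the demo config above. The inherited path below assumes the non-pretrained counterpart of the config used earlier, and batch_size is a hypothetical override; please verify the available fields in configs/default.yaml:

inherit_from: configs/single_view_reconstruction/multi_view_supervision/ours_combined.yaml
training:
  out_dir: out/my_experiment
  batch_size: 32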

Further Information

More Work on Implicit Representations

If you like the DVR project, please check out other works on implicit representations from our group:

Other Relevant Works

Also check out other exciting works on inferring implicit representations without 3D supervision:
