Inverse Rendering for Complex Indoor Scenes: Shape, Spatially-Varying Lighting and SVBRDF From a Single Image

Overview

(Project page)

Zhengqin Li, Mohammad Shafiei, Ravi Ramamoorthi, Kalyan Sunkavalli, Manmohan Chandraker

Useful links:

  • Results on our new dataset

This is the official code release of the paper Inverse Rendering for Complex Indoor Scenes: Shape, Spatially-Varying Lighting and SVBRDF From a Single Image. The original models were trained on the SUNCG dataset extended with an SVBRDF mapping. Since SUNCG is no longer available due to copyright issues, we cannot release the original models. Instead, we rebuilt a new high-quality synthetic indoor scene dataset and trained our models on it; we will release the new dataset in the near future. The geometry configurations of the new dataset are based on ScanNet [1], a large-scale repository of 3D scans of real indoor scenes. Some example images can be found below, and a video is at this link.

Inverse rendering results of the models trained on the new dataset are shown below, along with scene editing results on real images, including object insertion and material editing. Models trained on the new dataset achieve performance comparable to our previous models. Quantitative comparisons are listed below, where [Li20] denotes our previous models trained on the extended SUNCG dataset.

Download the trained models

The trained models can be downloaded from this link. To test the models, please copy them to the same directory as the code and run the commands shown below.

Train and test on the synthetic dataset

To train the full models on the synthetic dataset, please run the following commands in order:

  • python trainBRDF.py --cuda --cascadeLevel 0 --dataRoot DATA: Train the first cascade of MGNet.
  • python trainLight.py --cuda --cascadeLevel 0 --dataRoot DATA: Train the first cascade of LightNet.
  • python trainBRDFBilateral.py --cuda --cascadeLevel 0 --dataRoot DATA: Train the bilateral solvers.
  • python outputBRDFLight.py --cuda --dataRoot DATA: Output the intermediate predictions, which will be used to train the second cascade.
  • python trainBRDF.py --cuda --cascadeLevel 1 --dataRoot DATA: Train the second cascade of MGNet.
  • python trainLight.py --cuda --cascadeLevel 1 --dataRoot DATA: Train the second cascade of LightNet.
  • python trainBRDFBilateral.py --cuda --cascadeLevel 1 --dataRoot DATA: Train the bilateral solvers.

To test the full models on the synthetic dataset, please run the following commands:

  • python testBRDFBilateral.py --cuda --dataRoot DATA: Test the BRDF and geometry predictions.
  • python testLight.py --cuda --cascadeLevel 0 --dataRoot DATA: Test the light predictions of the first cascade.
  • python testLight.py --cuda --cascadeLevel 1 --dataRoot DATA: Test the light predictions of the second cascade.

Train and test on IIW dataset for intrinsic decomposition

To train on the IIW dataset, please first train on the synthetic dataset and then run the command:

  • python trainFineTuneIIW.py --cuda --dataRoot DATA --IIWRoot IIW: Fine-tune the network on the IIW dataset.

To test the network on the IIW dataset, please run the following commands:

  • bash runIIW.sh: Output the predictions for the IIW dataset.
  • python CompareWHDR.py: Compute the WHDR on the predictions (a sketch of the metric is given below).

Please remember to fix the data paths in runIIW.sh and CompareWHDR.py.
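
For reference, WHDR (Weighted Human Disagreement Rate) is the standard IIW metric: each human annotation judges which of two points has darker reflectance (or marks them as about the same), and WHDR is the confidence-weighted fraction of judgements the predicted reflectance disagrees with. Below is a minimal sketch of the computation, assuming the standard IIW JSON annotation format and the common relative threshold delta = 0.1; CompareWHDR.py in this repository may differ in details.

```python
import json

def whdr(reflectance, judgements, delta=0.1):
    """Weighted Human Disagreement Rate (Bell et al. 2014).

    reflectance: (H, W) predicted grayscale reflectance (NumPy array).
    judgements:  dict loaded from one IIW annotation JSON file.
    delta:       relative threshold below which two points count as equal.
    """
    h, w = reflectance.shape
    points = {p['id']: p for p in judgements['intrinsic_points']}
    error_sum, weight_sum = 0.0, 0.0
    for c in judgements['intrinsic_comparisons']:
        darker, weight = c['darker'], c['darker_score']
        if darker not in ('1', '2', 'E') or not weight or weight <= 0:
            continue
        p1, p2 = points[c['point1']], points[c['point2']]
        # IIW stores point coordinates normalized to [0, 1].
        r1 = max(reflectance[min(int(p1['y'] * h), h - 1),
                             min(int(p1['x'] * w), w - 1)], 1e-10)
        r2 = max(reflectance[min(int(p2['y'] * h), h - 1),
                             min(int(p2['x'] * w), w - 1)], 1e-10)
        if r2 / r1 > 1.0 + delta:
            pred = '1'  # point 1 is darker
        elif r1 / r2 > 1.0 + delta:
            pred = '2'  # point 2 is darker
        else:
            pred = 'E'  # about equal
        if pred != darker:
            error_sum += weight
        weight_sum += weight
    return error_sum / weight_sum if weight_sum > 0 else 0.0

# Example usage with a prediction and its matching IIW annotation file:
# judgements = json.load(open('113.json'))
# print(whdr(albedo_gray, judgements))
```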

Train and test on NYU dataset for geometry prediction

To train on the NYU dataset, please first train on the synthetic dataset and then run the commands:

  • python trainFineTuneNYU.py --cuda --dataRoot DATA --NYURoot NYU: Fine-tune the network on the NYU dataset.
  • python trainFineTuneNYU_casacde1.py --cuda --dataRoot DATA --NYURoot NYU: Fine-tune the second cascade of the network on the NYU dataset.

To test the network on the NYU dataset, please run the following commands:

  • bash runNYU.sh: Output the predictions for the NYU dataset.
  • python CompareNormal.py: Compute the normal error on the predictions.
  • python CompareDepth.py: Compute the depth error on the predictions (sketches of both metrics are given below).

Please remember to fix the data paths in runNYU.sh, CompareNormal.py and CompareDepth.py.
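
For reference, the standard geometry metrics here are the mean angular error between predicted and ground-truth normals, and a depth error computed after aligning the prediction's global scale (depth from a single image is recovered only up to scale). Below is a minimal sketch of one common variant of each; CompareNormal.py and CompareDepth.py may implement different details such as the alignment or error norm.

```python
import numpy as np

def mean_angular_error(normal_pred, normal_gt, mask):
    """Mean angle in degrees between predicted and ground-truth normals.

    normal_pred, normal_gt: (H, W, 3) arrays; mask: (H, W) boolean array
    marking pixels with valid ground truth.
    """
    n_p = normal_pred / np.maximum(
        np.linalg.norm(normal_pred, axis=-1, keepdims=True), 1e-10)
    n_g = normal_gt / np.maximum(
        np.linalg.norm(normal_gt, axis=-1, keepdims=True), 1e-10)
    cos = np.clip((n_p * n_g).sum(axis=-1), -1.0, 1.0)
    return np.degrees(np.arccos(cos))[mask].mean()

def l1_depth_error(depth_pred, depth_gt, mask):
    """L1 depth error after median-ratio scale alignment, since a single
    image only determines depth up to an unknown global scale."""
    scale = np.median(depth_gt[mask]) / max(np.median(depth_pred[mask]), 1e-10)
    return np.abs(scale * depth_pred[mask] - depth_gt[mask]).mean()
```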

Train and test on Garon19 [2] dataset for object insertion

There is no fine-tuning for the Garon19 dataset. To test the network, download the images from this link and then run bash runReal20.sh. Please remember to fix the data path in runReal20.sh.

All object insertion results and comparisons with prior works can be found at this link. The code to run object insertion can be found at this link.

Differences from the original paper

The current implementation has three major differences from the original CVPR 2020 implementation.

  • In the new models, we do not use spherical Gaussian parameters generated from optimization for supervision, mainly because the optimization process is time-consuming and we have not finished it yet. We will update the code once it is done. The performance with spherical Gaussian supervision is expected to be better (a sketch of the spherical Gaussian lighting representation follows this list).
  • The resolution of the second cascade is changed from 480x640 to 240x320. We find that the networks generate smoother results at the lower resolution.
  • We remove the light source segmentation mask as an input. It does not have a major impact on the final results.
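
For context, the spatially-varying lighting in this framework is represented as a per-pixel mixture of spherical Gaussians (SGs), where each lobe has a unit axis, a sharpness, and an RGB amplitude. The sketch below shows how such a mixture is evaluated along query directions; the function and parameter names are illustrative assumptions, not the repository's exact API.

```python
import numpy as np

def eval_sg_mixture(directions, axes, sharpness, amplitudes):
    """Evaluate a mixture of spherical Gaussians along unit directions.

    Each SG lobe contributes c * exp(lambda * (dot(d, xi) - 1)), where xi
    is the unit lobe axis, lambda >= 0 the sharpness, and c the amplitude.

    directions: (N, 3) unit query directions d.
    axes:       (K, 3) unit lobe axes xi.
    sharpness:  (K,)   lobe sharpness values lambda.
    amplitudes: (K, 3) RGB lobe amplitudes c.
    Returns:    (N, 3) RGB radiance.
    """
    cos = directions @ axes.T                          # (N, K)
    lobes = np.exp(sharpness[None, :] * (cos - 1.0))   # (N, K), peak 1 at xi
    return lobes @ amplitudes                          # (N, 3)
```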

References

[1] Dai, A., Chang, A. X., Savva, M., Halber, M., Funkhouser, T., & Nießner, M. (2017). ScanNet: Richly-annotated 3D reconstructions of indoor scenes. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 5828-5839).

[2] Garon, M., Sunkavalli, K., Hadap, S., Carr, N., & Lalonde, J. F. (2019). Fast spatially-varying indoor lighting estimation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 6908-6917).
