Official PyTorch code of DeepPanoContext: Panoramic 3D Scene Understanding with Holistic Scene Context Graph and Relation-based Optimization (ICCV 2021 Oral).

Overview

DeepPanoContext (DPC) [Project Page (with interactive results)][Paper]

DeepPanoContext: Panoramic 3D Scene Understanding with Holistic Scene Context Graph and Relation-based Optimization

Cheng Zhang, Zhaopeng Cui, Cai Chen, Shuaicheng Liu, Bing Zeng, Hujun Bao, Yinda Zhang

[teaser and pipeline figures]

Introduction

This repo contains the data generation, data preprocessing, training, testing, evaluation, and visualization code of our ICCV 2021 paper.

Install

Install the necessary tools and create the conda environment (install Anaconda first if it is not available):

sudo apt install xvfb ninja-build freeglut3-dev libglew-dev meshlab
conda env create -f environment.yaml
conda activate Pano3D
python -m pip install detectron2 -f https://dl.fbaipublicfiles.com/detectron2/wheels/cu101/torch1.7/index.html
python project.py build
  • When running python project.py build, the script runs external/build_gaps.sh, which asks for a sudo password to apt-get install several libraries. Please make sure you run it as a user with sudo privileges. If you have none, ask your administrator to install these libraries, comment out the corresponding lines, then run python project.py build again.
  • If you encounter a /usr/bin/ld: cannot find -lGL problem when building GAPS, please follow this issue.

Since the dataloader loads a large number of variables, please follow this to raise the open file descriptor limit of your system before training. For example, to change the setting permanently, edit /etc/security/limits.conf with a text editor and add the following lines:

*         hard    nofile      500000
*         soft    nofile      500000
root      hard    nofile      500000
root      soft    nofile      500000
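To verify the limit, or to raise it for the current shell session only while you test (a quick sanity check; the permanent change above still requires logging out and back in), you can use the ulimit builtin:

ulimit -Sn            # show the current soft limit for open file descriptors
ulimit -n 500000      # raise it for this session only; raising beyond the hard limit requires root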

Demo

Download the pretrained checkpoints of the detector, the layout estimation network, and the other modules, then unzip them so that the out folder sits in the root directory of this project. Since the given checkpoints were trained with the current, refactored version of our code, the results are slightly better than those reported in our paper.
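As a minimal sketch, assuming the checkpoints were downloaded as a single archive named pretrained.zip (a hypothetical name; the actual file name may differ):

unzip pretrained.zip -d .   # the out folder should land in the project root
ls out/                     # verify the checkpoint subfolders are in place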

Please run the following command to predict on the given example in demo/input with our full model:

CUDA_VISIBLE_DEVICES=0 WANDB_MODE=dryrun python main.py configs/pano3d_igibson.yaml --model.scene_gcn.relation_adjust True --mode test

Or run without relation optimization:

CUDA_VISIBLE_DEVICES=0 WANDB_MODE=dryrun python main.py configs/pano3d_igibson.yaml --mode test

The results will be saved to out/pano3d/<demo_id>. If nothing goes wrong, you should get the following results:

[demo outputs: rgb.png, visual.png, det3d.jpg, render.png]

Data preparation

Our data is rendered with iGibson. Here, we follow their Installation guide to download the iGibson dataset, then render and preprocess the data with our code.

  1. Download iGibson dataset with:

    python -m gibson2.utils.assets_utils --download_ig_dataset
  2. Render panorama with:

    python -m utils.render_igibson_scenes --renders 10 --random_yaw --random_obj --horizon_lo --world_lo

    The rendered dataset should be in data/igibson/ (a quick sanity check is sketched after this list).

  3. Make models watertight and render/crop single object image:

    python -m utils.preprocess_igibson_obj --skip_mgn

    The processed results should be in data/igibson_obj/.

  4. (Optional) Before proceeding to the training steps, you could visualize the dataset ground truth of data/igibson/ with:

    python -m utils.visualize_igibson

    Results ('visual.png' and 'render.png') should be saved to the folder of each camera, like data/igibson/Pomaria_0_int/00007.
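A quick shell sanity check of the generated data, assuming the per-camera folder layout shown above (e.g. data/igibson/Pomaria_0_int/00007):

ls data/igibson/ | wc -l                                    # number of rendered scenes
find data/igibson -mindepth 2 -maxdepth 2 -type d | wc -l   # total number of camera folders
ls data/igibson_obj/ | head                                 # processed single-object results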

Training and Testing

Preparation

  1. We use the pretrained weights of Implicit3DUnderstanding for fine-tuning the Bdb3d Estimation Network (BEN) and LIEN+LDIF. Please download the pretrained checkpoint and unzip it into out/total3d/20110611514267/ (a placement sketch follows this list).

  2. We use wandb for logging and visualizing experiments. You can follow their quickstart guide to sign up for a free account and log in on your machine with wandb login. The training and testing results will be uploaded to your project "deeppanocontext".

  3. Hint: The <XXX_id> in the commands below needs to be replaced with the XXX_id obtained in the previous steps.

  4. Hint: In the steps below, when training or testing with main.py, you can override yaml configurations with command line parameters:

    CUDA_VISIBLE_DEVICES=0 python main.py configs/layout_estimation_igibson.yaml --train.epochs 100

    This might be helpful when debugging or tuning hyper-parameters.
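A minimal sketch of the preparation above, assuming the Implicit3DUnderstanding checkpoint was downloaded as pretrained_total3d.zip (a hypothetical name; adjust the unzip target if the archive already contains the folder):

mkdir -p out/total3d/20110611514267
unzip pretrained_total3d.zip -d out/total3d/20110611514267   # hypothetical archive name
wandb login   # one-time login; prepend WANDB_MODE=dryrun to any command to skip uploading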

First Stage

2D Detector

  1. Train 2D detector (Mask RCNN) with:

    CUDA_VISIBLE_DEVICES=0 python train_detector.py

    The trained weights will be saved to out/detector/detector_mask_rcnn

  2. (Optional) When training 2D detector, you could visualize the training process with:

    tensorboard --logdir out/detector/detector_mask_rcnn --bind_all --port 6006
  3. (Optional) Evaluate with:

    CUDA_VISIBLE_DEVICES=0 python test_detector.py

    The results will be saved to out/detector/detector_mask_rcnn/evaluation_{train/test}. Alternatively, you can visualize the prediction results on test set with:

     CUDA_VISIBLE_DEVICES=0 python test_detector.py --visualize --split test

    The visualization will be saved to the folder containing the model weights file.

  4. (Optional) Visualize BFoV detection results:

    CUDA_VISIBLE_DEVICES=0 python main.py configs/detector_2d_igibson.yaml --mode qtest --log.vis_step 1

    The visualization will be saved to out/detector/<detector_test_id>

Layout Estimation

Train layout estimation network (HorizonNet) with:

CUDA_VISIBLE_DEVICES=0 python main.py configs/layout_estimation_igibson.yaml

The checkpoint will be saved to out/layout_estimation/<layout_estimation_id>/model_best.pth, along with the visualization results.

Save First Stage Outputs

  1. Save the predictions of the 2D detector and LEN as a dataset for stage 2 training:

    CUDA_VISIBLE_DEVICES=0 WANDB_MODE=dryrun python main.py configs/first_stage_igibson.yaml --mode qtest --weight out/layout_estimation/<layout_estimation_id>/model_best.pth

    The first stage outputs should be saved to data/igibson_stage1

  2. (Optional) Visualize stage 1 dataset with:

    python -m utils.visualize_igibson --dataset data/igibson_stage1 --skip_render

Second Stage

Object Reconstruction

Train object reconstruction network (LIEN+LDIF) with:

CUDA_VISIBLE_DEVICES=0 python main.py configs/ldif_igibson.yaml

The checkpoint and visualization results will be saved to out/ldif/<ldif_id>.

Bdb3D Estimation

Train bdb3d estimation network (BEN) with:

CUDA_VISIBLE_DEVICES=0 python main.py configs/bdb3d_estimation_igibson.yaml

The checkpoint and visualization results will be saved to out/bdb3d_estimation/<bdb3d_estimation_id>.

Relation SGCN

  1. Train Relation SGCN without the relation branch:

    CUDA_VISIBLE_DEVICES=0 python main.py configs/relation_scene_gcn_igibson.yaml --model.scene_gcn.output_relation False --model.scene_gcn.loss BaseLoss --weight out/bdb3d_estimation/<bdb3d_estimation_id>/model_best.pth out/ldif/<ldif_id>/model_best.pth

    The checkpoint and visualization results will be saved to out/relation_scene_gcn/<relation_sgcn_wo_rel_id>.

  2. Train Relation SGCN with the relation branch:

    CUDA_VISIBLE_DEVICES=0 python main.py configs/relation_scene_gcn_igibson.yaml --weight out/relation_scene_gcn/<relation_sgcn_wo_rel_id>/model_best.pth --train.epochs 20 

    The checkpoint and visualization results will be saved to out/relation_scene_gcn/<relation_sgcn_id>.

  3. Fine-tune Relation SGCN end-to-end with relation optimization:

    CUDA_VISIBLE_DEVICES=0 python main.py configs/relation_scene_gcn_igibson.yaml --weight out/relation_scene_gcn/<relation_sgcn_id>/model_best.pth --model.scene_gcn.relation_adjust True --train.batch_size 1 --val.batch_size 1 --device.num_workers 2 --train.freeze shape_encoder shape_decoder --model.scene_gcn.loss_weights.bdb3d_proj 1.0 --model.scene_gcn.optimize_steps 20 --train.epochs 10

    The checkpoint and visualization results will be saved to out/relation_scene_gcn/<relation_sgcn_ro_id>.

Test Full Model

Run:

CUDA_VISIBLE_DEVICES=0 python main.py configs/relation_scene_gcn_igibson.yaml --weight out/relation_scene_gcn/<relation_sgcn_ro_id>/model_best.pth --log.path out/relation_scene_gcn --resume False --finetune True --model.scene_gcn.relation_adjust True --mode qtest --model.scene_gcn.optimize_steps 100

The visualization results will be saved to out/relation_scene_gcn/<relation_sgcn_ro_test_id>.

Citation

If you find our work and code helpful, please consider citing:

@misc{zhang2021deeppanocontext,
      title={DeepPanoContext: Panoramic 3D Scene Understanding with Holistic Scene Context Graph and Relation-based Optimization}, 
      author={Cheng Zhang and Zhaopeng Cui and Cai Chen and Shuaicheng Liu and Bing Zeng and Hujun Bao and Yinda Zhang},
      year={2021},
      eprint={2108.10743},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

@InProceedings{Zhang_2021_CVPR,
    author    = {Zhang, Cheng and Cui, Zhaopeng and Zhang, Yinda and Zeng, Bing and Pollefeys, Marc and Liu, Shuaicheng},
    title     = {Holistic 3D Scene Understanding From a Single Image With Implicit Representation},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2021},
    pages     = {8833-8842}
}

We thank the following great works:

  • Total3DUnderstanding for their well-structured code, on which we build our network.
  • Coop for their dataset. We used their processed dataset with 2D detector predictions.
  • LDIF for their novel representation method. We ported their LDIF decoder from TensorFlow to PyTorch.
  • Graph R-CNN for their scene graph design. We adopted their GCN implementation to construct our SGCN.
  • Occupancy Networks for their modified version of the mesh-fusion pipeline.

If you find them helpful, please cite:

@InProceedings{Nie_2020_CVPR,
    author = {Nie, Yinyu and Han, Xiaoguang and Guo, Shihui and Zheng, Yujian and Chang, Jian and Zhang, Jian Jun},
    title = {Total3DUnderstanding: Joint Layout, Object Pose and Mesh Reconstruction for Indoor Scenes From a Single Image},
    booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month = {June},
    year = {2020}
}
@inproceedings{huang2018cooperative,
    title={Cooperative Holistic Scene Understanding: Unifying 3D Object, Layout, and Camera Pose Estimation},
    author={Huang, Siyuan and Qi, Siyuan and Xiao, Yinxue and Zhu, Yixin and Wu, Ying Nian and Zhu, Song-Chun},
    booktitle={Advances in Neural Information Processing Systems},
    pages={206--217},
    year={2018}
}
@inproceedings{genova2020local,
    title={Local Deep Implicit Functions for 3D Shape},
    author={Genova, Kyle and Cole, Forrester and Sud, Avneesh and Sarna, Aaron and Funkhouser, Thomas},
    booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
    pages={4857--4866},
    year={2020}
}
@inproceedings{yang2018graph,
    title={Graph R-CNN for Scene Graph Generation},
    author={Yang, Jianwei and Lu, Jiasen and Lee, Stefan and Batra, Dhruv and Parikh, Devi},
    booktitle={Proceedings of the European Conference on Computer Vision (ECCV)},
    pages={670--685},
    year={2018}
}
@inproceedings{mescheder2019occupancy,
    title={Occupancy Networks: Learning 3D Reconstruction in Function Space},
    author={Mescheder, Lars and Oechsle, Michael and Niemeyer, Michael and Nowozin, Sebastian and Geiger, Andreas},
    booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
    pages={4460--4470},
    year={2019}
}