RGB-D Local Implicit Function for Depth Completion of Transparent Objects

[Project Page] [Paper]

Overview

This repository maintains the official implementation of our CVPR 2021 paper:

RGB-D Local Implicit Function for Depth Completion of Transparent Objects

By Luyang Zhu, Arsalan Mousavian, Yu Xiang, Hammad Mazhar, Jozef van Eenbergen, Shoubhik Debnath, Dieter Fox

Requirements

The code has been tested on the following system:

  • Ubuntu 18.04
  • Nvidia GPU (4 Tesla V100 32GB GPUs) and CUDA 10.2
  • python 3.7
  • pytorch 1.6.0

Installation

Docker (Recommended)

We provide a Dockerfile for building a container to run our code. More details about GPU-accelerated Docker containers can be found here.
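As a reference, the commands below sketch one way to build and run the container with GPU access; the image tag (lidf) and the mount paths are placeholders, not names defined by this repo.

# Build the image from the provided Dockerfile (image tag is a placeholder)
docker build -t lidf .
# Start an interactive container with GPU access; mount the repo and datasets
docker run --gpus all -it \
    -v ${REPO_ROOT_DIR}:/workspace/lidf \
    -v ${DATASET_ROOT_DIR}:/workspace/datasets \
    lidf /bin/bash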

Local Installation

We recommend creating a new conda environment for a clean installation of the dependencies.

conda create --name lidf python=3.7
conda activate lidf

Make sure CUDA 10.2 is your default CUDA toolkit. If CUDA 10.2 is installed in /usr/local/cuda-10.2, add the following lines to your ~/.bashrc and run source ~/.bashrc:

export PATH=$PATH:/usr/local/cuda-10.2/bin
export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/usr/local/cuda-10.2/lib64
export CPATH=$CPATH:/usr/local/cuda-10.2/include
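A quick sanity check that the CUDA 10.2 toolkit is now the one on your PATH:

# Both commands should point to /usr/local/cuda-10.2 and report release 10.2
which nvcc
nvcc --version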

Install libopenexr-dev

sudo apt-get update && sudo apt-get install libopenexr-dev

Install the dependencies. We use ${REPO_ROOT_DIR} to represent the working directory of this repo.

cd ${REPO_ROOT_DIR}
conda install pytorch==1.6.0 torchvision==0.7.0 cudatoolkit=10.2 -c pytorch
pip install -r requirements.txt
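An optional sanity check that PyTorch was installed against CUDA 10.2 and can see the GPU:

# Expected output: 1.6.0 10.2 True
python -c "import torch; print(torch.__version__, torch.version.cuda, torch.cuda.is_available())"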

Dataset Preparation

ClearGrasp Dataset

The ClearGrasp dataset can be downloaded from its official website (both the training and testing datasets are needed). After you download the zip files and unzip them on your local machine, the folder structure should look like this:

${DATASET_ROOT_DIR}
├── cleargrasp
│   ├── cleargrasp-dataset-train
│   ├── cleargrasp-dataset-test-val

Omniverse Object Dataset

The Omniverse Object Dataset can be downloaded here. After you download the zip files and unzip them on your local machine, the folder structure should look like this:

${DATASET_ROOT_DIR}
├── omniverse
│   ├── train
│   │   ├── 20200904
│   │   ├── 20200910

Soft link dataset

cd ${REPO_ROOT_DIR}
ln -s ${DATASET_ROOT_DIR}/cleargrasp datasets/cleargrasp
ln -s ${DATASET_ROOT_DIR}/omniverse datasets/omniverse
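You can verify that the links resolve to your dataset locations:

# Both entries should show up as symlinks pointing into ${DATASET_ROOT_DIR}
ls -l datasets/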

Testing

We provide pretrained checkpoints on Google Drive. After you download the file, unzip it and copy the checkpoints folder into ${REPO_ROOT_DIR}.
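For example, assuming the downloaded archive is named checkpoints.zip (the actual filename may differ):

# Unzip the pretrained checkpoints into the repo root
cd ${REPO_ROOT_DIR}
unzip checkpoints.zip
# The result should be a ${REPO_ROOT_DIR}/checkpoints folder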

Change the following line in ${REPO_ROOT_DIR}/src/experiments/implicit_depth/run.sh:

# To test first stage model (LIDF), use the following line
cfg_paths=experiments/implicit_depth/test_lidf.yaml
# To test second stage model (refinement model), use the following line
cfg_paths=experiments/implicit_depth/test_refine.yaml

After that, run the testing code:

cd src
bash experiments/implicit_depth/run.sh

Training

First stage model (LIDF)

Change the following line in ${REPO_ROOT_DIR}/src/experiments/implicit_depth/run.sh:

cfg_paths=experiments/implicit_depth/train_lidf.yaml

After that, run the training code:

cd src
bash experiments/implicit_depth/run.sh

Second stage model (refinement model)

In ${REPO_ROOT_DIR}/src/experiments/implicit_depth/train_refine.yaml, set lidf_ckpt_path to the path of the best checkpoint from the first-stage training (a sketch of this entry is shown at the end of this subsection). Change the following line in ${REPO_ROOT_DIR}/src/experiments/implicit_depth/run.sh:

cfg_paths=experiments/implicit_depth/train_refine.yaml

After that, run the training code:

cd src
bash experiments/implicit_depth/run.sh
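For reference, a minimal sketch of the relevant entry in train_refine.yaml; the checkpoint path below is a placeholder and should be replaced with the best checkpoint produced by your own first-stage run:

# ${REPO_ROOT_DIR}/src/experiments/implicit_depth/train_refine.yaml
# (placeholder path -- point it at your best first-stage checkpoint)
lidf_ckpt_path: /path/to/checkpoints/lidf/model_best.pth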

Second stage model (refinement model) with hard negative mining

In ${REPO_ROOT_DIR}/src/experiments/implicit_depth/train_refine_hardneg.yaml, set lidf_ckpt_path to the path of the best checkpoint from the first-stage training and checkpoint_path to the path of the best checkpoint from the second-stage training. Change the following line in ${REPO_ROOT_DIR}/src/experiments/implicit_depth/run.sh:

cfg_paths=experiments/implicit_depth/train_refine_hardneg.yaml

After that, run the training code:

cd src
bash experiments/implicit_depth/run.sh

License

This work is licensed under NVIDIA Source Code License - Non-commercial.

Citation

If you use this code for your research, please cite our work:

@inproceedings{zhu2021rgbd,
  author    = {Luyang Zhu and Arsalan Mousavian and Yu Xiang and Hammad Mazhar and Jozef van Eenbergen and Shoubhik Debnath and Dieter Fox},
  title     = {RGB-D Local Implicit Function for Depth Completion of Transparent Objects},
  booktitle = {IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year      = {2021}
}