
[CVPR 2022] NPBG++: Accelerating Neural Point-Based Graphics

Project Page | Paper

This repository contains the official Python implementation of the paper.

The repository also contains a faithful implementation of NPBG.

We provide pipelines for the following datasets: ScanNet, NeRF-Synthetic, H3DS, and DTU.

We follow the PyTorch3D convention for coordinate systems and cameras.
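As an illustration of that convention, here is a minimal PyTorch3D camera sketch (the intrinsic values below are placeholders, not taken from this repository's data loaders):

# Minimal sketch of the PyTorch3D camera convention (illustrative values, not this repo's loaders).
# PyTorch3D uses a right-handed system with +X left, +Y up, +Z pointing away from the camera,
# and the row-vector convention X_cam = X_world @ R + T.
import torch
from pytorch3d.renderer import PerspectiveCameras

R = torch.eye(3)[None]        # world-to-view rotation, shape (1, 3, 3)
T = torch.zeros(1, 3)         # world-to-view translation, shape (1, 3)
cameras = PerspectiveCameras(
    focal_length=torch.tensor([[2.0, 2.0]]),      # fx, fy in NDC units (placeholder values)
    principal_point=torch.tensor([[0.0, 0.0]]),   # cx, cy in NDC units
    R=R,
    T=T,
)
points_world = torch.rand(1, 100, 3)
points_ndc = cameras.transform_points(points_world)   # project world-space points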

Changelog

  • [April 27, 2022] Added more example data and point clouds
  • [April 5, 2022] Initial code release

Dependencies

python -m venv ~/.venv/npbgplusplus
source ~/.venv/npbgplusplus/bin/activate
pip install -r requirements.txt

# install pytorch3d
curl -LO https://github.com/NVIDIA/cub/archive/1.10.0.tar.gz
tar xzf 1.10.0.tar.gz
export CUB_HOME=$PWD/cub-1.10.0
pip install "git+https://github.com/facebookresearch/[email protected]" --no-cache-dir --verbose

# install torch_scatter (2.0.8)
pip install torch-scatter -f https://pytorch-geometric.com/whl/torch-1.9.1+${CUDA}.html
# where ${CUDA} should be replaced by one of cpu, cu101, cu102, or cu111, depending on your PyTorch installation.
# ${CUDA} must match torch.version.cuda (not the runtime or driver version)
# note: the torch version in the wheel URL must also match (e.g. specifying 1.7.1 instead of 1.7.0 produces an "incompatible cuda version" error)

python setup.py build develop
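To choose the right ${CUDA} tag for the torch-scatter wheel above, you can check which CUDA version your PyTorch build was compiled with (a quick sanity check, not part of the repository):

# Print the torch version and the CUDA version it was built with, to pick the wheel tag
# (e.g. "10.2" -> cu102, "11.1" -> cu111, None -> cpu).
import torch

print(torch.__version__)    # should match the torch version in the wheel URL (1.9.1 here)
print(torch.version.cuda)   # e.g. "11.1"; None for a CPU-only build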

Below are examples of how to run the particular stages of the different models on the different datasets.

How to run NPBG++

Checkpoints and example data are available here.

Run training
python train_net.py trainer.gpus=4 hydra.run.dir=experiments/npbgpp_scannet datasets=scannet_pretrain datasets.n_point=6e6 system=npbgpp_sphere system.visibility_scale=0.5 trainer.max_epochs=39 dataloader.train_data_mode=each trainer.reload_dataloaders_every_n_epochs=1
python train_net.py trainer.gpus=4 hydra.run.dir=experiments/npbgpp_nerf datasets=nerf_blender_pretrain system=npbgpp_sphere system.visibility_scale=1.0 trainer.max_epochs=24 dataloader.train_data_mode=each weights_path=experiments/npbgpp_scannet/checkpoints/epoch38.ckpt
python train_net.py trainer.gpus=4 hydra.run.dir=experiments/npbgpp_h3ds datasets=h3ds_pretrain system=npbgpp_sphere system.visibility_scale=1.0 trainer.max_epochs=24 dataloader.train_data_mode=each trainer.reload_dataloaders_every_n_epochs=1 weights_path=experiments/npbgpp_scannet/checkpoints/epoch38.ckpt
python train_net.py trainer.gpus=4 hydra.run.dir=experiments/npbgpp_dtu datasets=dtu_pretrain system=npbgpp_sphere system.visibility_scale=1.0 trainer.max_epochs=36 dataloader.train_data_mode=each trainer.reload_dataloaders_every_n_epochs=1  weights_path=experiments/npbgpp_scannet/checkpoints/epoch38.ckpt
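The ScanNet run produces experiments/npbgpp_scannet/checkpoints/epoch38.ckpt, which the other commands reuse via weights_path. To verify a checkpoint before passing it on, it can be inspected directly; a minimal sketch, assuming the standard PyTorch Lightning checkpoint layout ("epoch" and "state_dict" keys):

# Minimal sketch: inspect a Lightning checkpoint before reusing it as weights_path.
import torch

ckpt = torch.load("experiments/npbgpp_scannet/checkpoints/epoch38.ckpt", map_location="cpu")
print(ckpt["epoch"])                         # last completed epoch
print(list(ckpt["state_dict"].keys())[:5])   # first few parameter names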
Run testing
python train_net.py trainer.gpus=1 hydra.run.dir=experiments/npbgpp_eval_scan118 datasets=dtu_one_scene datasets.data_root=$\{hydra:runtime.cwd\}/example/DTU_masked datasets.scene_name=scan118 system=npbgpp_sphere system.visibility_scale=1.0 weights_path=./checkpoints/npbgpp_dtu_nm_mvs_ft_epoch35.ckpt eval_only=true dataloader=small
Run finetuning of coefficients
python train_net.py trainer.gpus=1 hydra.run.dir=experiments/npbgpp_5ae021f2805c0854_ft datasets=h3ds_one_scene datasets.data_root=$\{hydra:runtime.cwd\}/example/H3DS datasets.selection_count=0 datasets.train_num_samples=2000 datasets.train_image_size=null datasets.train_random_shift=false datasets.train_random_zoom=[0.5,2.0] datasets.scene_name=5ae021f2805c0854 system=coefficients_ft system.max_points=1e6 system.descriptors_save_dir=$\{hydra:run.dir\}/descriptors trainer.max_epochs=20 system.descriptors_pretrained_dir=experiments/npbgpp_eval_5ae021f2805c0854/descriptors weights_path=$\{hydra:runtime.cwd\}/checkpoints/npbgpp_h3ds.ckpt dataloader=small
Run testing with finetuned coefficients
python train_net.py trainer.gpus=1 hydra.run.dir=experiments/npbgpp_5ae021f2805c0854_test datasets=h3ds_one_scene datasets.data_root=$\{hydra:runtime.cwd\}/example/H3DS datasets.selection_count=0 datasets.scene_name=5ae021f2805c0854 system=coefficients_ft system.max_points=1e6 system.descriptors_save_dir=$\{hydra:run.dir\}/descriptors system.descriptors_pretrained_dir=experiments/npbgpp_5ae021f2805c0854_ft/descriptors weights_path=experiments/npbgpp_5ae021f2805c0854_ft/checkpoints/last.ckpt dataloader=small eval_only=true

How to run NPBG

Run pretraining
python train_net.py trainer.gpus=4 hydra.run.dir=experiments/npbg_scannet datasets=scannet_pretrain datasets.train_random_zoom=[0.5,2.0] datasets.train_image_size=512 datasets.selection_count=0 system=npbg system.descriptors_save_dir=experiments/npbg_scannet/result/descriptors trainer.max_epochs=39 dataloader.train_data_mode=each trainer.reload_dataloaders_every_n_epochs=1 trainer.limit_val_batches=0 system.max_points=11e6
python train_net.py trainer.gpus=4 hydra.run.dir=experiments/npbg_nerf datasets=nerf_blender_pretrain datasets.train_random_zoom=[0.5,2.0] datasets.train_image_size=512 datasets.selection_count=0 system=npbg system.descriptors_save_dir=experiments/npbg_nerf/result/descriptors trainer.max_epochs=24 dataloader.train_data_mode=each trainer.reload_dataloaders_every_n_epochs=1 trainer.limit_val_batches=0 system.max_points=4e6
python train_net.py trainer.gpus=4 hydra.run.dir=experiments/npbg_h3ds datasets=h3ds_pretrain datasets.train_random_zoom=[0.5,2.0] datasets.train_image_size=null datasets.train_random_shift=false datasets.selection_count=0 system=npbg system.descriptors_save_dir=experiments/npbg_h3ds/result/descriptors trainer.max_epochs=24 dataloader.train_data_mode=each trainer.reload_dataloaders_every_n_epochs=1 trainer.limit_val_batches=0 system.max_points=3e6
python train_net.py trainer.gpus=4 hydra.run.dir=experiments/npbg_dtu_nm datasets=dtu_pretrain datasets.train_random_zoom=[0.5,2.0] datasets.train_image_size=512 datasets.selection_count=0 system=npbg system.descriptors_save_dir=experiments/npbg_dtu_nm/result/descriptors trainer.max_epochs=36 dataloader.train_data_mode=each trainer.reload_dataloaders_every_n_epochs=1 trainer.limit_val_batches=0 system.max_points=3e6
Run fine-tuning on 1 scene
python train_net.py trainer.gpus=4 hydra.run.dir=experiments/npbg_scannet_0045 datasets=scannet_one_scene datasets.scene_name=scene0045_00 datasets.n_point=6e6 datasets.train_random_zoom=[0.5,2.0] datasets.train_image_size=512 datasets.selection_count=0 system=npbg system.descriptors_save_dir=experiments/npbg_scannet_0045/result/descriptors system.max_scenes_per_train_epoch=1 trainer.max_epochs=20 weights_path=experiments/npbg_scannet/result/checkpoints/epoch38.ckpt system.max_points=6e6
python train_net.py trainer.gpus=4 hydra.run.dir=experiments/npbg_nerf_hotdog datasets=nerf_blender_one_scene datasets.scene_name=hotdog datasets.train_random_zoom=[0.5,2.0] datasets.train_image_size=512 datasets.selection_count=0 system=npbg system.descriptors_save_dir=experiments/npbg_nerf_hotdog/result/descriptors system.max_scenes_per_train_epoch=1 trainer.max_epochs=20 weights_path=experiments/npbg_nerf/result/checkpoints/epoch23.ckpt system.max_points=4e6
python train_net.py trainer.gpus=4 hydra.run.dir=experiments/npbg_h3ds_5ae021f2805c0854 datasets=h3ds_one_scene datasets.scene_name=5ae021f2805c0854 datasets.train_random_zoom=[0.5,2.0] datasets.train_image_size=null datasets.train_random_shift=false datasets.selection_count=0 system=npbg system.descriptors_save_dir=experiments/npbg_h3ds_5ae021f2805c0854/result/descriptors system.max_scenes_per_train_epoch=1 trainer.max_epochs=20 weights_path=experiments/npbg_h3ds/result/checkpoints/epoch23.ckpt system.max_points=3e6
python train_net.py trainer.gpus=4 hydra.run.dir=experiments/npbg_dtu_nm_scan110 datasets=dtu_one_scene datasets.scene_name=scan110 datasets.train_random_zoom=[0.5,2.0] datasets.train_image_size=512 datasets.selection_count=0 system=npbg system.descriptors_save_dir=experiments/npbg_dtu_nm_scan110/result/descriptors system.max_scenes_per_train_epoch=1 trainer.max_epochs=20 weights_path=experiments/npbg_dtu_nm/result/checkpoints/epoch35.ckpt system.max_points=3e6

Citation

If you find our work useful in your research, please consider citing:

@article{rakhimov2022npbg++,
  title={NPBG++: Accelerating Neural Point-Based Graphics},
  author={Rakhimov, Ruslan and Ardelean, Andrei-Timotei and Lempitsky, Victor and Burnaev, Evgeny},
  journal={arXiv preprint arXiv:2203.13318},
  year={2022}
}

License

See the LICENSE for more details.
