Plenoxels: Radiance Fields without Neural Networks (code release WIP)


Plenoxels: Radiance Fields without Neural Networks

Alex Yu*, Sara Fridovich-Keil*, Matthew Tancik, Qinhong Chen, Benjamin Recht, Angjoo Kanazawa

UC Berkeley

Website and video: https://alexyu.net/plenoxels

arXiv: https://arxiv.org/abs/2112.05131

Note: This is a preliminary release. We have not carefully tested everything, but felt it was better to release the code early rather than wait.

Also, despite the name, this is not strictly intended to be a successor to svox.

Citation:

@misc{yu2021plenoxels,
      title={Plenoxels: Radiance Fields without Neural Networks}, 
      author={{Alex Yu and Sara Fridovich-Keil} and Matthew Tancik and Qinhong Chen and Benjamin Recht and Angjoo Kanazawa},
      year={2021},
      eprint={2112.05131},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

This repository contains the official optimization code. A JAX implementation is also available at https://github.com/sarafridov/plenoxels; note, however, that the JAX version is currently feature-limited, running at about 1 hour per epoch and supporting only bounded scenes.

Figures: fast optimization demo and method overview.

Setup

First create the environment; we recommend using conda:

conda env create -f environment.yml
conda activate plenoxel

Then clone the repo and install the library at the root (svox2), which includes a CUDA extension.

If your CUDA toolkit is older than 11, then you will need to install CUB as follows: conda install -c bottler nvidiacub. Since CUDA 11, CUB is shipped with the toolkit.
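For reference, the clone-and-prerequisites step might look like this (the repository URL below is an assumption; substitute the official repo location if yours differs):

git clone https://github.com/sxyu/svox2.git
cd svox2
# Only needed if your CUDA toolkit is older than 11:
conda install -c bottler nvidiacub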

To install the main library, run the following in the repo root directory:

pip install .
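As a quick sanity check (not part of the original instructions), you can verify that the package and its CUDA extension import cleanly:

python -c "import svox2; print('svox2 imported successfully')"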

Getting datasets

We have backends for NeRF-Blender, LLFF, NSVF, and CO3D dataset formats, and the dataset will be auto-detected. Please get the NeRF-synthetic and LLFF datasets from:

https://drive.google.com/drive/folders/128yBriW1IG_3NJ5Rp7APSTZsJqdJdfc1

We provide a processed Tanks and Temples dataset (with background) in NSVF format at: https://drive.google.com/file/d/1PD4oTP4F8jTtpjd_AQjCsL4h8iYFCyvO/view?usp=sharing

Note that this data should be identical to that used in NeRF++.
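One possible layout for the downloaded data is sketched below; the directory and archive names are purely illustrative, since any location works as long as you pass it as <data_dir> later:

mkdir -p data
# Archive names below are hypothetical -- use whatever the downloads are actually called
unzip nerf_synthetic.zip -d data/
unzip nerf_llff_data.zip -d data/
unzip tanks_and_temples.zip -d data/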

Voxel Optimization (aka Training)

For training a single scene, see opt/opt.py. The launch script makes this easier.

Inside opt/, run ./launch.sh <exp_name> <GPU_id> <data_dir> -c <config>

where <config> should be configs/syn.json for NeRF-synthetic scenes, configs/llff.json for forward-facing scenes, or configs/tnt.json for Tanks and Temples scenes.

The dataset format will be auto-detected from <data_dir>. Checkpoints will be written to ckpt/<exp_name>.
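For example, to train the synthetic Lego scene on GPU 0 (the data path is an assumption about where you extracted the NeRF-synthetic dataset):

./launch.sh lego 0 ../data/nerf_synthetic/lego -c configs/syn.json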

Evaluation

Use opt/render_imgs.py

Usage (in opt/): python render_imgs.py <CHECKPOINT.npz> <data_dir>

By default this saves all frames, which is very slow. Add --no_imsave to avoid this.
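For example, assuming the checkpoint was written to ckpt/lego/ckpt.npz (the exact filename may differ in your setup):

python render_imgs.py ckpt/lego/ckpt.npz ../data/nerf_synthetic/lego --no_imsave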

Rendering a spiral

Use opt/render_imgs_circle.py

Usage (in opt/): python render_imgs_circle.py <CHECKPOINT.npz> <data_dir>
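For example, under the same path assumptions as above:

python render_imgs_circle.py ckpt/lego/ckpt.npz ../data/nerf_synthetic/lego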

Parallel task executor

We provide a parallel task executor based on the task manager from PlenOctrees to automatically schedule many tasks across sets of scenes or hyperparameters. This is used for evaluation, ablations, and hyperparameter tuning; see opt/autotune.py. Configs are in opt/tasks/*.json.

For example, to automatically train and evaluate all synthetic scenes, first change train_root and data_root in tasks/eval.json, then run:

python autotune.py -g '<space delimited GPU ids>' tasks/eval.json

For forward-facing scenes:

python autotune.py -g '<space delimited GPU ids>' tasks/eval_ff.json

For Tanks and Temples scenes:

python autotune.py -g '<space delimited GPU ids>' tasks/eval_tnt.json
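For example, with four GPUs:

python autotune.py -g '0 1 2 3' tasks/eval.json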

Using a custom image set

First make sure you have COLMAP installed. Then run:

(in opt/) bash scripts/proc_colmap.sh <img_dir>

where <img_dir> should be a directory directly containing png/jpg images from a normal perspective camera. For custom datasets we adopt a data format similar to that of NSVF (https://github.com/facebookresearch/NSVF).

You should be able to use this dataset directly afterwards. The format will be auto-detected.

To view the data, use: python scripts/view_data.py <img_dir>

This should launch a server at localhost:8889.

You may need to tune the TV (total variation) regularization weights. For forward-facing scenes, making the TV weights 10x higher is often helpful (configs/llff_hitv.json). For the real Lego capture I used the config configs/custom.json.
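Putting the custom-capture steps together, an end-to-end run might look like the following (the capture directory is hypothetical; all commands are run from opt/):

bash scripts/proc_colmap.sh ~/captures/my_scene
python scripts/view_data.py ~/captures/my_scene    # inspect the recovered poses at localhost:8889
./launch.sh my_scene 0 ~/captures/my_scene -c configs/custom.json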

Random tip: how to make pip install faster for native extensions

You may notice that this CUDA extension takes a long time to install. One suggestion is to use ninja. On Ubuntu, install it with sudo apt install ninja-build. Then set the environment variable MAX_JOBS to the number of CPUs to use in parallel (e.g. 12) in your shell startup script. This enables parallel compilation and significantly improves iteration speed.
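For example, on Ubuntu (12 jobs is just an illustration; match it to your CPU count):

sudo apt install ninja-build
echo 'export MAX_JOBS=12' >> ~/.bashrc    # or your shell's startup script
source ~/.bashrc
pip install .    # rebuild the extension with parallel compilation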
