(Arxiv 2021) NeRF--: Neural Radiance Fields Without Known Camera Parameters

Project Page | Arxiv | Colab Notebook | Data

Zirui Wang¹, Shangzhe Wu², Weidi Xie², Min Chen³, Victor Adrian Prisacariu¹.

¹Active Vision Lab + ²Visual Geometry Group + ³e-Research Centre, University of Oxford.

Overview

We provide three training targets in this repository, under the tasks directory:

  1. tasks/nerfmm/train.py: This is our main training script for the NeRF-LLFF dataset. It jointly estimates camera poses, focal lengths and a NeRF, and monitors the absolute trajectory error (ATE) between our camera parameter estimates and the COLMAP estimates during training. This target can also start training from a COLMAP initialisation and refine the COLMAP camera parameters. A minimal sketch of the joint optimisation follows this list.
  2. tasks/refine_nerfmm/train.py: This is the training script that refines a pretrained nerfmm system.
  3. tasks/any_folder/train.py: This is a training script that takes a folder of forward-facing images and trains our nerfmm system on it without any comparison to COLMAP. It is similar to what we offer in our Colab notebook, and we treat this any_folder target as a playground where users can try novel view synthesis by simply providing an image folder, without caring how the camera parameter estimates compare with COLMAP.
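
As a rough sketch of the joint optimisation idea (the class names, image counts, shapes and learning rates below are illustrative assumptions, not the repo's actual code):

import torch
import torch.nn as nn

class LearnPose(nn.Module):
    # one learnable axis-angle rotation and translation per training image,
    # both initialised to zero, i.e. the identity pose
    def __init__(self, num_imgs):
        super().__init__()
        self.r = nn.Parameter(torch.zeros(num_imgs, 3))
        self.t = nn.Parameter(torch.zeros(num_imgs, 3))

class LearnFocal(nn.Module):
    # learnable focal lengths, initialised to the image resolution
    def __init__(self, h, w):
        super().__init__()
        self.fx = nn.Parameter(torch.tensor(float(w)))
        self.fy = nn.Parameter(torch.tensor(float(h)))

pose_net, focal_net = LearnPose(20), LearnFocal(756, 1008)
nerf = nn.Sequential(nn.Linear(3, 256), nn.ReLU(), nn.Linear(256, 4))  # stand-in MLP

# separate optimisers allow different learning rates for the NeRF and the cameras;
# each training step builds rays from (pose_net, focal_net), renders them with the
# NeRF, and back-propagates a photometric loss into all three modules
optimisers = [torch.optim.Adam(m.parameters(), lr=1e-3)
              for m in (nerf, pose_net, focal_net)]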

For each target, we provide relevant utilities to evaluate our system. Specifically,

  • for the nerfmm target, we provide three utility files:
    • eval.py to evaluate image rendering quality on validation splits with PSNR, SSIM and LPIPS, i.e., the results in Table 1.
    • spiral.py to render novel views using a spiral camera trajectory, i.e. results in Figure 1.
    • vis_learned_poses.py to visualise our camera parameter estimates alongside the COLMAP estimates in 3D. It also computes the ATE between them, i.e. E1 in Table 2 (a minimal ATE sketch follows this list).
  • for the refine_nerfmm target, all utilities of the nerfmm target above are compatible, since this target simply refines a pretrained nerfmm system.
  • for the any_folder target, it has its own spiral.py and vis_learned_poses.py utilities, since it does not compare with COLMAP. It has no eval.py file, because this target is treated as a playground and does not split images into train/validation sets; it only provides novel view synthesis results via spiral.py.
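
The ATE computation can be sketched as follows, assuming two (N, 3) arrays of camera centres (this Umeyama-style similarity alignment is a common way to compute ATE; the repo's own implementation may differ in detail):

import numpy as np

def ate_rmse(est, ref):
    # est, ref: (N, 3) camera centres; align est to ref with a similarity
    # transform, then report the RMSE of the remaining translation error
    mu_e, mu_r = est.mean(0), ref.mean(0)
    e, r = est - mu_e, ref - mu_r
    U, S, Vt = np.linalg.svd(r.T @ e)       # cross-covariance of the two sets
    D = np.eye(3)
    D[2, 2] = np.sign(np.linalg.det(U @ Vt))  # guard against reflections
    R = U @ D @ Vt                            # optimal rotation
    s = (S * np.diagonal(D)).sum() / (e ** 2).sum()  # optimal scale
    t = mu_r - s * R @ mu_e                   # optimal translation
    aligned = s * est @ R.T + t
    return float(np.sqrt(((aligned - ref) ** 2).sum(1).mean()))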

Table of Contents

  • Overview
  • Environment
  • Get Data
  • Training
  • Evaluation
  • Acknowledgement
  • Citation

Environment

We provide an environment.yml file to set up a conda environment:

git clone https://github.com/ActiveVisionLab/nerfmm.git
cd nerfmm
conda env create -f environment.yml

Generally, our code should run with any PyTorch >= 1.1.

(Optional) Install open3d for visualisation. You might need a physical monitor to install this library.

pip install open3d

Get Data

We use the NeRF-LLFF dataset with two small structural changes:

  1. We remove their image_4 and image_8 folders and instead downsample images to any desired resolution during data loading (see dataloader/with_colmap.py), by calling PyTorch's interpolate function. A minimal sketch of both changes follows this list.
  2. We explicitly generate two txt files for the train/val image ids, i.e. we take every 8th image as the validation set, as in the official NeRF train/val split. The only difference is that we store the split as txt files, while NeRF splits the images during data loading. The script that produces these two txt files is utils/split_dataset.py.
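
Both steps can be sketched as follows (image count, resolutions and tensor shapes here are illustrative assumptions; the real code is in dataloader/with_colmap.py and utils/split_dataset.py):

import torch
import torch.nn.functional as F

imgs = torch.rand(20, 3, 756, 1008)              # (N, C, H, W) full-resolution images
imgs_small = F.interpolate(imgs, size=(378, 504),
                           mode='bilinear', align_corners=False)  # downsample at load time

ids = list(range(imgs.shape[0]))
val_ids = ids[::8]                               # every 8th image, as in NeRF's split
train_ids = [i for i in ids if i not in val_ids]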

In addition to the NeRF-LLFF dataset, we provide two demo scenes to demonstrate how to use the any_folder target.

We pack the restructured LLFF data and our demo data into a tarball (~1.8 GB). To get it, run:

wget https://www.robots.ox.ac.uk/~ryan/nerfmm2021/nerfmm_release_data.tar.gz

Untar the data:

tar -xzvf path/to/the/tar.gz

Training

We show how to:

  1. train a nerfmm from scratch, i.e. initialise camera poses with identity matrices and focal lengths with the image resolution:
    python tasks/nerfmm/train.py \
    --base_dir='path/to/nerfmm_release/data' \
    --scene_name='LLFF/fern'
  2. train a nerfmm from COLMAP initialisation:
    python tasks/nerfmm/train.py \
    --base_dir='path/to/nerfmm_release/data' \
    --scene_name='LLFF/fern' \
    --start_refine_pose_epoch=1000 \
    --start_refine_focal_epoch=1000
    This command initialises the nerfmm target with COLMAP parameters, keeps them fixed for the first 1000 epochs, and then starts refining them.
  3. train a nerfmm from a pretrained nerfmm:
    python tasks/refine_nerfmm/train.py \
    --base_dir='path/to/nerfmm_release/data' \
    --scene_name='LLFF/fern' --start_refine_epoch=1000 \
    --ckpt_dir='path/to/a/dir/contains/nerfmm/ckpts'
    This command initialises the refine_nerfmm target with a set of pretrained nerfmm parameters, keeps them fixed for the first 1000 epochs, and then starts refining them.
  4. train an any_folder from scratch given an image folder:
    python tasks/any_folder/train.py \
    --base_dir='path/to/nerfmm_release/data' \
    --scene_name='any_folder_demo/desk'
    This command trains an any_folder target using a provided demo scene desk.

(Optional) Set a symlink to the downloaded data:

mkdir data_dir  # do it in this nerfmm repo
cd data_dir
ln -s /path/to/downloaded/data ./nerfmm_release_data
cd ..

This can simplify the above training commands; for example:

python tasks/nerfmm/train.py

Evaluation

Compute image quality metrics

Call eval.py in the nerfmm target:

python tasks/nerfmm/eval.py \
--base_dir='path/to/nerfmm_release/data' \
--scene_name='LLFF/fern' \
--ckpt_dir='path/to/a/dir/contains/nerfmm/ckpts'

This file can also be used to evaluate a checkpoint trained with the refine_nerfmm target. For some scenes, you might need to tweak the --opt_eval_lr option to get the best results; common values are 0.01 / 0.005 / 0.001 / 0.0005 / 0.0001, and the default is 0.001. Overall, the script optimises validation poses to produce the highest PSNR on the validation set while freezing the NeRF and the focal lengths (sketched below). We do this because the learned camera pose space differs from the COLMAP-estimated camera pose space.
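
A minimal sketch of that evaluation-time alignment, with illustrative names and counts (eval.py implements the real procedure):

import torch

# freeze the NeRF and focal lengths; only the validation poses are learnable
val_pose = torch.nn.Parameter(torch.zeros(8, 6))   # one se(3) vector per val image
optimiser = torch.optim.Adam([val_pose], lr=1e-3)  # this lr is --opt_eval_lr

# each step renders the validation views from val_pose, computes a photometric
# loss against the held-out images, and updates only the poses; image quality
# metrics are then reported from the best-aligned poses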

Render novel views

Call spiral.py in each target. The spiral.py in nerfmm is also compatible with the refine_nerfmm target:

python spiral.py \
--base_dir='path/to/nerfmm_release/data' \
--scene_name='LLFF/fern' \
--ckpt_dir='path/to/a/dir/contains/nerfmm/ckpts'
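
For intuition, a spiral trajectory of camera centres can be sketched as below, assuming the average camera sits at the origin looking down -z (the function name and parameters are illustrative; spiral.py builds the real path from the data):

import numpy as np

def spiral_centres(n_frames=120, n_rounds=2, radius=0.5, depth_amp=0.2):
    # camera centres on a spiral around the average pose
    t = np.linspace(0.0, 2.0 * np.pi * n_rounds, n_frames)
    x = radius * np.cos(t)
    y = -radius * np.sin(t)
    z = depth_amp * np.sin(t / n_rounds)  # gentle back-and-forth along depth
    return np.stack([x, y, z], axis=1)    # (n_frames, 3)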

Visualise estimated poses in 3D

Call vis_learned_poses.py in each target. The vis_learned_poses.py in nerfmm is also compatible with the refine_nerfmm target:

python vis_learned_poses.py \
--base_dir='path/to/nerfmm_release/data' \
--scene_name='LLFF/fern' \
--ckpt_dir='path/to/a/dir/contains/nerfmm/ckpts'
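
If open3d is installed, a bare-bones version of such a comparison might look like the following (the camera centres here are random stand-ins; vis_learned_poses.py draws the real, richer visualisation):

import numpy as np
import open3d as o3d

ours = np.random.rand(20, 3)    # stand-in for our estimated camera centres
colmap = np.random.rand(20, 3)  # stand-in for COLMAP's camera centres

pcd_ours = o3d.geometry.PointCloud()
pcd_ours.points = o3d.utility.Vector3dVector(ours)
pcd_ours.paint_uniform_color([1.0, 0.0, 0.0])    # our estimates in red

pcd_colmap = o3d.geometry.PointCloud()
pcd_colmap.points = o3d.utility.Vector3dVector(colmap)
pcd_colmap.paint_uniform_color([0.0, 0.0, 1.0])  # COLMAP estimates in blue

o3d.visualization.draw_geometries([pcd_ours, pcd_colmap])  # needs a display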

Acknowledgement

Shangzhe Wu is supported by Facebook Research. Weidi Xie is supported by Visual AI (EP/T028572/1).

The authors would like to thank Tim Yuqing Tang for insightful discussions and proofreading.

During our NeRF implementation, we referenced several open-source NeRF implementations, and we thank their authors for their contributions. Specifically, we referenced functions from nerf and nerf-pytorch, and borrowed/modified code from nerfplusplus and nerf_pl. We especially appreciate the detailed code comments and GitHub issue answers in nerf_pl.

Citation

@article{wang2021nerfmm,
  title={Ne{RF}$--$: Neural Radiance Fields Without Known Camera Parameters},
  author={Zirui Wang and Shangzhe Wu and Weidi Xie and Min Chen and Victor Adrian Prisacariu},
  journal={arXiv preprint arXiv:2102.07064},
  year={2021}
}