Builds upon neural radiance fields to create a scene-specific implicit 3D semantic representation, Semantic-NeRF

Overview

Semantic-NeRF: Semantic Neural Radiance Fields

Project Page | Video | Paper | Data

In-Place Scene Labelling and Understanding with Implicit Scene Representation
Shuaifeng Zhi, Tristan Laidlow, Stefan Leutenegger, Andrew J. Davison,
Dyson Robotics Laboratory at Imperial College
Published in ICCV 2021 (Oral Presentation)

We build upon neural radiance fields to create a scene-specific implicit 3D semantic representation, Semantic-NeRF.

Getting Started

For faithful reproduction of our results, Ubuntu 20.04 is recommended. The models have been tested with Python 3.7, PyTorch 1.6.0 and CUDA 10.1; higher versions should also work.

Dependencies

Main python dependencies are listed below:

  • Python >=3.7
  • torch>=1.6.0 (provides the native searchsorted API; older versions need the third-party implementation SearchSorted instead, see the quick check after this list)
  • cudatoolkit>=10.1
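
To confirm that the installed PyTorch build already provides the native searchsorted API, a quick check such as the following can be used (a minimal sketch that only inspects the local installation):

import torch

# torch.searchsorted was added in PyTorch 1.6.0; older builds need the
# third-party SearchSorted extension instead.
print(torch.__version__)
print(hasattr(torch, "searchsorted"))  # True for torch >= 1.6.0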

The following packages are used for 3D mesh reconstruction:

  • trimesh==3.9.9
  • open3d==0.12.0

With Anaconda, you can create a virtual environment and install the dependencies by running:

  • conda create -n semantic_nerf python=3.7
  • conda activate semantic_nerf
  • pip install -r requirements.txt

Datasets

We mainly use the Replica and ScanNet datasets for our experiments, training a new Semantic-NeRF model on each 3D scene. Other similar indoor datasets providing colour images, semantic labels and poses can also be used.

We also provide pre-rendered Replica data that can be directly used by Semantic-NeRF.

Running code

After cloning the repository, you can run Semantic-NeRF from its root directory.

Semantic-NeRF training

For standard Semantic-NeRF training with full dense semantic supervision, simply run the following command with a chosen config file specifying the data directory and hyper-parameters:

python3 train_SSR_main.py --config_file /SSR/configs/SSR_room0_config.yaml

Different working modes and set-ups can be selected via command-line arguments:

Semantic View Synthesis with Sparse Labels:

python3 train_SSR_main.py --sparse_views --sparse_ratio 0.6

The sparse ratio here is the fraction of frames dropped from the training sequence.
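
As a back-of-the-envelope illustration of what the ratio means (the frame count below is an assumption for illustration, not a value from the repository):

# Hypothetical example: with 900 training frames and --sparse_ratio 0.6,
# 60% of the frames are dropped, leaving roughly 360 labelled frames.
num_frames = 900                    # assumed sequence length
sparse_ratio = 0.6
kept = int(num_frames * (1.0 - sparse_ratio))
print(kept)                         # -> 360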

Pixel-wise Denoising Task:

python3 train_SSR_main.py --pixel_denoising --pixel_noise_ratio 0.5
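
Here the noise ratio is the fraction of pixels whose semantic label is randomly corrupted. A minimal NumPy sketch of this style of label corruption (an illustration with placeholder values, not the repository's exact noise-generation code):

import numpy as np

def corrupt_labels(labels, noise_ratio, num_classes, seed=0):
    """Randomly re-assign a fraction of pixel labels to random classes."""
    rng = np.random.default_rng(seed)
    noisy = labels.copy()
    flat = noisy.reshape(-1)                       # view into noisy
    n_flip = int(noise_ratio * flat.size)
    idx = rng.choice(flat.size, size=n_flip, replace=False)
    flat[idx] = rng.integers(0, num_classes, size=n_flip)
    return noisy

labels = np.zeros((240, 320), dtype=np.int64)      # placeholder label map
noisy = corrupt_labels(labels, noise_ratio=0.5, num_classes=40)
print((noisy != labels).mean())                    # close to 0.5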

We can also combine the denoising task with a sparse set of frames:

python3 train_SSR_main.py --pixel_denoising --pixel_noise_ratio 0.5 --sparse_views --sparse_ratio 0.6

Region-wise Denoising Task (for Replica Room2):

python3 train_SSR_main.py --region_denoising --region_noise_ratio 0.3

The argument --uniform_flip corresponds to the two modes, "Even" and "Sort", of the region-wise denoising task.

Super-Resolution Task:

For super-resolution with dense labels, please run

python3 train_SSR_main.py --super_resolution --sr_factor 8 --dense_sr

For super-resolution with sparse labels, please run

python3 train_SSR_main.py --super_resolution --sr_factor 8
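
If sr_factor denotes the down-scaling factor of the supervising labels relative to the rendered colour images (our reading of the flag, stated as an assumption), the resolution relationship would look like this (image size below is also assumed):

# Illustration only: with --sr_factor 8, e.g. 640x480 colour frames would
# be paired with 80x60 low-resolution label maps during training.
width, height = 640, 480            # assumed full image resolution
sr_factor = 8
print(width // sr_factor, height // sr_factor)   # -> 80 60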

Label Propagation Task:

For the label propagation task with single-click seed regions, please run

python3 train_SSR_main.py --label_propagation --partial_perc 0

To improve reproducibility of the denoising and label propagation tasks, you can also pass --visualise_save and --load_saved to save and load the randomly generated labels.

3D Reconstruction of Replica Scenes

We also provide code for extracting a 3D semantic mesh from a trained Semantic-NeRF model.

python3 SSR/extract_colour_mesh.py --sem --mesh_dir PATH_TO_MESH --training_data_dir PATH_TO_TRAINING_DATA --save_dir PATH_TO_SAVE_DIR
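
Once extraction has finished, the resulting mesh can be inspected with the mesh libraries listed in the dependencies; for example (the file name below is a placeholder, not necessarily the name written by the script):

import open3d as o3d

# Load and visualise an extracted colour/semantic mesh; replace the path
# with the file actually written to PATH_TO_SAVE_DIR.
mesh = o3d.io.read_triangle_mesh("PATH_TO_SAVE_DIR/semantic_mesh.ply")
mesh.compute_vertex_normals()
o3d.visualization.draw_geometries([mesh])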

For more demos and qualitative results, please check our project page and video.

Acknowledgements

Thanks to nerf, nerf-pytorch and nerf_pl for providing nice and inspiring implementations of NeRF.

Citation

If you find this code/work useful in your own research, please consider citing the following:

@inproceedings{Zhi:etal:ICCV2021,
  title={In-Place Scene Labelling and Understanding with Implicit Scene Representation},
  author={Shuaifeng Zhi and Tristan Laidlow and Stefan Leutenegger and Andrew J. Davison},
  booktitle={ICCV},
  year={2021}
}

Contact

If you have any questions, please contact [email protected] or [email protected].
