Official code release for "GRAF: Generative Radiance Fields for 3D-Aware Image Synthesis"

GRAF


This repository contains official code for the paper GRAF: Generative Radiance Fields for 3D-Aware Image Synthesis.

You can find detailed usage instructions for training your own models and using pre-trained models below.

If you find our code or paper useful, please consider citing

@inproceedings{Schwarz2020NEURIPS,
  title = {GRAF: Generative Radiance Fields for 3D-Aware Image Synthesis},
  author = {Schwarz, Katja and Liao, Yiyi and Niemeyer, Michael and Geiger, Andreas},
  booktitle = {Advances in Neural Information Processing Systems (NeurIPS)},
  year = {2020}
}

Installation

First, make sure that you have all dependencies in place. The simplest way to do so is to use Anaconda.

You can create a conda environment called graf using

conda env create -f environment.yml
conda activate graf

Next, install torchsearchsorted for nerf-pytorch. Note that this requires torch>=1.4.0 and CUDA >= 10.1. You can install torchsearchsorted via

cd submodules/nerf_pytorch
pip install -r requirements.txt
cd torchsearchsorted
pip install .
cd ../../../
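
To check that the extension built correctly against your GPU, you can run a short sanity check like the one below (a minimal sketch; the searchsorted entry point follows the torchsearchsorted README and assumes a CUDA-capable device):

import torch
from torchsearchsorted import searchsorted

# sorted rows to search in and query values, both on the GPU
a, _ = torch.sort(torch.rand(1, 8, device='cuda'), dim=1)
v = torch.rand(1, 3, device='cuda')
print(searchsorted(a, v))  # insertion indices of v into each row of a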

Demo

You can now test our code via:

python eval.py configs/carla.yaml --pretrained --rotation_elevation

This script should create a folder results/carla_128_from_pretrained/eval/ where you can find generated videos with varying camera pose for the Cars dataset.

Datasets

If you only want to generate images using our pretrained models, you do not need to download the datasets. The datasets are only needed if you want to train a model from scratch.

Cars

To download the Cars dataset from the paper simply run

cd data
./download_carla.sh
cd ..

This creates a folder data/carla/, downloads the images as a zip file, and extracts them to data/carla/. While we do not use camera poses in this project, we provide them for completeness. You can download them by running

cd data
./download_carla_poses.sh
cd ..

This downloads the camera intrinsics (single file, equal for all images) and extrinsics corresponding to each image.
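
If you want to inspect the downloaded poses, a sketch along these lines may help (the file names and the matrix layout here are assumptions made for illustration; check the extracted files for the exact format):

import numpy as np

# hypothetical paths; adjust to the actual layout under data/carla/
K = np.loadtxt('data/carla/intrinsics.txt')    # shared intrinsics, one file for all images
pose = np.load('data/carla/poses/000000.npy')  # per-image extrinsics, assumed 4x4
print(K.shape, pose.shape)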

Faces

Download celebA. Then replace data/celebA in configs/celebA.yaml with *PATH/TO/CELEBA*/Img/img_align_celebA.

Download celebA_hq. Then replace data/celebA_hq in configs/celebAHQ.yaml with *PATH/TO/CELEBA_HQ*.

Cats

Download the CatDataset. Run

cd data
python preprocess_cats.py PATH/TO/CATS/DATASET
cd ..

to preprocess the data and save it to data/cats. If successful, this script should print: Preprocessed 9407 images.

Birds

Download CUB-200-2011 and the corresponding Segmentation Masks. Run

cd data
python preprocess_cub.py PATH/TO/CUB-200-2011 PATH/TO/SEGMENTATION/MASKS
cd ..

to preprocess the data and save it to data/cub. If successful, this script should print: Preprocessed 8444 images.

Usage

When you have installed all dependencies, you are ready to run our pre-trained models for 3D-aware image synthesis.

Generate images using a pretrained model

To evaluate a pretrained model, run

python eval.py CONFIG.yaml --pretrained --fid_kid --rotation_elevation --shape_appearance

where you replace CONFIG.yaml with one of the config files in ./configs.

This script should create a folder results/EXPNAME/eval with FID and KID scores in fid_kid.csv, videos for rotation and elevation in the respective folders, and a shape and appearance interpolation, shape_appearance.png.
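
If you want to pull the scores into your own scripts, something like the following should do (a minimal sketch; it assumes fid_kid.csv is a plain comma-separated file, and EXPNAME is a placeholder for your experiment name):

import csv

# print every row of the metrics file, header included
with open('results/EXPNAME/eval/fid_kid.csv') as f:
    for row in csv.reader(f):
        print(row)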

Note that some pretrained models are available at multiple image sizes, which you can choose by setting data:imsize in the config file to one of the following values (a short script for switching the value programmatically is sketched below the list):

configs/carla.yaml: 
    data:imsize 64 or 128 or 256 or 512
configs/celebA.yaml:
    data:imsize 64 or 128
configs/celebAHQ.yaml:
    data:imsize 256 or 512
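
Rather than editing the config by hand, you can switch the value with a few lines of Python (a minimal sketch, assuming PyYAML is installed and that the value is nested under data -> imsize as indicated above):

import yaml

# load the config, change the resolution, and write it back
with open('configs/carla.yaml') as f:
    cfg = yaml.safe_load(f)

cfg['data']['imsize'] = 256  # pick one of the sizes listed above

with open('configs/carla.yaml', 'w') as f:
    yaml.safe_dump(cfg, f)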

Train a model from scratch

To train a 3D-aware generative model from scratch run

python train.py CONFIG.yaml

where you replace CONFIG.yaml with your config file. The easiest way is to use one of the existing config files in the ./configs directory, which correspond to the experiments presented in the paper. Note that this will train the model from scratch and will not resume training from a pretrained model.

You can monitor the training process on http://localhost:6006 using tensorboard:

cd OUTPUT_DIR
tensorboard --logdir ./monitoring --port 6006

where you replace OUTPUT_DIR with the respective output directory.

For available training options, please take a look at configs/default.yaml.

Evaluation of a new model

For evaluation of the models, run

python eval.py CONFIG.yaml --fid_kid --rotation_elevation --shape_appearance

where you replace CONFIG.yaml with your config file.

Multi-View Consistency Check

You can evaluate the multi-view consistency of the generated images by running a Multi-View-Stereo (MVS) algorithm on them. This evaluation uses COLMAP, so make sure that you have COLMAP installed before running

python eval.py CONFIG.yaml --reconstruction

where you replace CONFIG.yaml with your config file. You can also evaluate our pretrained models via:

python eval.py configs/carla.yaml --pretrained --reconstruction

This script should create a folder results/EXPNAME/eval/reconstruction/ where you can find generated multi-view images in images/ and the corresponding 3D reconstructions in models/.

Further Information

GAN training

This repository uses Lars Mescheder's awesome framework for GAN training.

NeRF

We base our code for the Generator on this great PyTorch reimplementation of Neural Radiance Fields.
