Datasets for a new state-of-the-art challenge in disentanglement learning

Overview

High resolution disentanglement datasets

This repository contains the Falcor3D and Isaac3D datasets, which present a state-of-the-art challenge for controllable generation in terms of image resolution, photorealism, and richness of style factors, as compared to existing disentanglement datasets.

Falcor3D

The Falcor3D dataset consists of 233,280 images based on the 3D scene of a living room, where each image has a resolution of 1024x1024. The meta code corresponds to all possible combinations of 7 factors of variation:

  • lighting_intensity (5)
  • lighting_x-dir (6)
  • lighting_y-dir (6)
  • lighting_z-dir (6)
  • camera_x-pos (6)
  • camera_y-pos (6)
  • camera_z-pos (6)

Note that the number m in parentheses after each factor indicates that the factor has m possible values, uniformly sampled in the normalized range of variations [0, 1].

Each image has the filename padded_index.png, where

index = lighting_intensity * 46656 + lighting_x-dir * 7776 + lighting_y-dir * 1296 + 
lighting_z-dir * 216 + camera_x-pos * 36 + camera_y-pos * 6 + camera_z-pos

padded_index = index padded with zeros such that it has 6 digits.
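
As an illustration, the following Python snippet (a hypothetical helper, not part of this repository) maps a tuple of Falcor3D factor indices to the corresponding filename using the formula above:

# Hypothetical helper for illustration only; it is not shipped with the datasets.
FALCOR3D_FACTOR_SIZES = (5, 6, 6, 6, 6, 6, 6)  # factor sizes, in the order listed above

def falcor3d_filename(factors):
    """factors: 7 integers, with factors[i] in range(FALCOR3D_FACTOR_SIZES[i])."""
    index = 0
    for value, size in zip(factors, FALCOR3D_FACTOR_SIZES):
        assert 0 <= value < size
        index = index * size + value  # mixed-radix form of the stride formula above
    return "%06d.png" % index

print(falcor3d_filename((0, 0, 0, 0, 0, 0, 1)))  # -> 000001.png
print(falcor3d_filename((1, 0, 0, 0, 0, 0, 0)))  # -> 046656.png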

To see the Falcor3D images by varying each factor of variation individually, you can run

python dataset_demo.py --dataset Falcor3D

and the results are saved in the examples/falcor3d_samples folder.

You can also check out the Falcor3D images here: falcor3d_samples_demo, which includes all the ground-truth latent traversals.

Isaac3D

The Isaac3D dataset consists of 737,280 images, based on the 3D scene of a kitchen, where each image has a resolution of 512x512. The meta code corresponds to all possible combinations of 9 factors of variation:

  • object_shape (3)
  • robot_x-movement (8)
  • robot_y-movement (5)
  • camera_height (4)
  • object_scale (4)
  • lighting_intensity (4)
  • lighting_y-dir (6)
  • object_color (4)
  • wall_color (4)

Similarly, the number m in parentheses after each factor indicates that the factor has m possible values, uniformly sampled in the normalized range of variations [0, 1].

Each image has the filename padded_index.png, where

index = object_shape * 245760 + robot_x-movement * 30720 + robot_y-movement * 6144 + 
camera_height * 1536 + object_scale * 384 + lighting_intensity * 96 + 
lighting_y-dir * 16 + object_color * 4 + wall_color

padded_index = index padded with zeros such that it has 6 digits.
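
For Isaac3D, the same conversion can be written with numpy.ravel_multi_index, which performs exactly this mixed-radix computation; the helper below is again only an illustrative sketch, not part of the repository:

import numpy as np

# Hypothetical helper for illustration only; it is not shipped with the datasets.
ISAAC3D_FACTOR_SIZES = (3, 8, 5, 4, 4, 4, 6, 4, 4)  # factor sizes, in the order listed above

def isaac3d_filename(factors):
    """factors: 9 integers, one per factor, in the order listed above."""
    index = int(np.ravel_multi_index(factors, ISAAC3D_FACTOR_SIZES))
    return "%06d.png" % index

print(isaac3d_filename((1, 0, 0, 0, 0, 0, 0, 0, 0)))  # -> 245760.png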

To see the Isaac3D images by varying each factor of variation individually, you can run

python dataset_demo.py --dataset Isaac3D

and the results are saved in the examples/isaac3d_samples folder.

You can also check out the Isaac3D images here: isaac3d_samples_demo, which includes all the ground-truth latent traversals.

Links to datasets

The two datasets can be downloaded from Google Drive:

  • Falcor3D (98 GB): link
  • Isaac3D (190 GB): link

We also provide downsampled versions (resolution 128x128) of the two datasets:

  • Falcor3D_128x128 (3.7 GB): link
  • Isaac3D_128x128 (13 GB): link

License

This work is licensed under a Creative Commons Attribution 4.0 International License by NVIDIA Corporation (https://creativecommons.org/licenses/by/4.0/).
