Deep Illuminator

Deep Illuminator is a data augmentation tool designed for image relighting. It can be used to easily and efficiently generate a wide range of illumination variants of a single image. It has been tested with several datasets and models and has been shown to successfully improve performance. It has a built-in visualizer created with Streamlit to preview how the target image can be relit. This tool has an accompanying paper.

Example Augmentations

Usage

The simplest method to use this tool is through Docker Hub:

docker pull kartvel/deep-illuminator

Visualizer

Once you have pulled the Deep Illuminator image, run the following command to launch the visualizer:

docker run -it --rm --gpus all \
-p 8501:8501 --entrypoint streamlit \
kartvel/deep-illuminator run streamlit/streamlit_app.py

You will be able to interact with it on localhost:8501. Note: if you do not have NVIDIA GPU support enabled for Docker, simply remove the --gpus all option.
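For example, assuming no GPU is available, the same visualizer can be launched in CPU-only mode by dropping that flag:

docker run -it --rm \
-p 8501:8501 --entrypoint streamlit \
kartvel/deep-illuminator run streamlit/streamlit_app.py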

Generating Variants

It is possible to quickly generate multiple variants for images contained in a directory by using the following command:

docker run -it --rm --gpus all \
-v /path/to/input/images:/app/probe_relighting/originals \
-v /path/to/save/directory:/app/probe_relighting/output \
kartvel/deep-illuminator --[options]

Options

Option | Values               | Description
------ | -------------------- | -----------
mode   | ['synthetic', 'mid'] | Style of probes used as a relighting guide.
step   | int                  | Increment for the granularity of relit images (maximum for mid: 24, maximum for synthetic: 360).
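
As an illustration, assuming the options in the table above are passed as --mode and --step flags (the exact flag names are an assumption; see the instructions referenced below), a run generating synthetic-probe variants in 30-degree increments might look like:

docker run -it --rm --gpus all \
-v /path/to/input/images:/app/probe_relighting/originals \
-v /path/to/save/directory:/app/probe_relighting/output \
kartvel/deep-illuminator --mode synthetic --step 30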

Building the Docker image or running without a container

Please read the following for other options: instructions

Benchmarks

Improved performance of R2D2 on HPatches

Training Dataset   | Overall      | Viewpoint    | Illumination
------------------ | ------------ | ------------ | ------------
COCO - Original    | 71.0         | 65.4         | 77.1
COCO - Augmented   | 72.2 (+1.7%) | 65.7 (+0.4%) | 79.2 (+2.7%)
VIDIT - Original   | 66.7         | 60.5         | 73.4
VIDIT - Augmented  | 69.2 (+3.8%) | 60.9 (+0.6%) | 78.1 (+6.4%)
Aachen - Original  | 69.4         | 64.1         | 75.0
Aachen - Augmented | 72.6 (+4.6%) | 66.1 (+3.1%) | 79.6 (+6.1%)

Improved performance of R2D2 for the Long-Term Visual Localization challenge on Aachen v1.1

Training Dataset   | 0.25m, 2°    | 0.5m, 5°     | 5m, 10°
------------------ | ------------ | ------------ | ------------
COCO - Original    | 62.3         | 77.0         | 79.5
COCO - Augmented   | 65.4 (+5.0%) | 83.8 (+8.8%) | 92.7 (+16%)
VIDIT - Original   | 40.8         | 53.4         | 61.3
VIDIT - Augmented  | 53.9 (+32%)  | 71.2 (+33%)  | 83.2 (+36%)
Aachen - Original  | 60.7         | 72.8         | 83.8
Aachen - Augmented | 63.4 (+4.4%) | 81.7 (+12%)  | 92.1 (+9.9%)

Acknowledgment

The development of the VAE for the visualizer was made possible by the PyTorch-VAE repository.

Bibtex

If you use this code in your project, please consider citing the following paper:

@misc{chogovadze2021controllable,
      title={Controllable Data Augmentation Through Deep Relighting}, 
      author={George Chogovadze and Rémi Pautrat and Marc Pollefeys},
      year={2021},
      eprint={2110.13996},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}