
Zero-Shot Domain Adaptation with a Physics Prior

[arXiv] [sup. material] - ICCV 2021 Oral paper by Attila Lengyel, Sourav Garg, Michael Milford and Jan van Gemert.

This repository contains the PyTorch implementation of Color Invariant Convolutions and all experiments and datasets described in the paper.

Abstract

We explore the zero-shot setting for day-night domain adaptation. The traditional domain adaptation setting is to train on one domain and adapt to the target domain by exploiting unlabeled data samples from the test set. As gathering relevant test data is expensive and sometimes even impossible, we remove any reliance on test data imagery and instead exploit a visual inductive prior derived from physics-based reflection models for domain adaptation. We cast a number of color invariant edge detectors as trainable layers in a convolutional neural network and evaluate their robustness to illumination changes. We show that the color invariant layer reduces the day-night distribution shift in feature map activations throughout the network. We demonstrate improved performance for zero-shot day to night domain adaptation on both synthetic as well as natural datasets in various tasks, including classification, segmentation and place recognition.
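For intuition: the W invariant referenced throughout this repository is essentially a spatially normalized image gradient, W = |∇E| / E, where E is an intensity channel in the Gaussian color model. Dividing by E cancels multiplicative illumination changes, since if E' = aE then ∇E'/E' = ∇E/E. Below is a minimal fixed-scale sketch of this idea; it is not the repository's CIConv2d layer, which implements a family of such invariants with learnable-scale Gaussian derivative filters.

import torch
import torch.nn.functional as F

def w_invariant(rgb, eps=1e-5):
    """Fixed-scale W-type invariant |grad E| / E on an intensity channel E."""
    # Crude intensity channel; the Gaussian color model uses a specific
    # weighted RGB combination, for which a plain mean is a stand-in here.
    e = rgb.mean(dim=1, keepdim=True)
    # Central-difference kernels as stand-ins for Gaussian derivatives.
    kx = torch.tensor([[[[-0.5, 0.0, 0.5]]]])
    ky = kx.transpose(2, 3)
    ex = F.conv2d(e, kx, padding=(0, 1))
    ey = F.conv2d(e, ky, padding=(1, 0))
    # Dividing by E cancels a global intensity scaling of the input.
    return torch.sqrt(ex ** 2 + ey ** 2) / (e + eps)

x = torch.rand(1, 3, 32, 32)
print(w_invariant(x).shape)  # torch.Size([1, 1, 32, 32])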

Getting started

All code and experiments have been tested with PyTorch 1.7.0.

Create a local clone of this repository:

git clone https://github.com/Attila94/CIConv

The method directory contains the color invariant convolution (CIConv) layer, as well as custom ResNet and VGG models using the CIConv layer. To use the CIConv layer in your own architecture, simply copy ciconv2d.py to the desired directory and add it to your model as a regular PyTorch layer:

from ciconv2d import CIConv2d
ciconv = CIConv2d('W', k=3, scale=0.0)

See resnet.py and vgg.py for examples.
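As a minimal sketch of what such an integration can look like, the toy classifier below prepends CIConv2d to a small convolutional stack. It assumes, as in the repository's resnet.py, that CIConv2d consumes a 3-channel RGB image and emits a single-channel invariant map; all other names here are illustrative only.

import torch
import torch.nn as nn
from ciconv2d import CIConv2d

class TinyCIConvNet(nn.Module):
    def __init__(self, num_classes=10):
        super().__init__()
        # Color invariant front end replacing the usual RGB stem.
        self.ciconv = CIConv2d('W', k=3, scale=0.0)
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1),  # 1 input channel
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
        )
        self.fc = nn.Linear(32, num_classes)

    def forward(self, x):        # x: (N, 3, H, W) RGB batch
        x = self.ciconv(x)       # -> (N, 1, H, W) invariant map
        x = self.features(x)
        return self.fc(x.flatten(1))

model = TinyCIConvNet()
print(model(torch.randn(2, 3, 64, 64)).shape)  # torch.Size([2, 10])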

Datasets

Shapenet Illuminants

[Download link]

Shapenet Illuminants is used in the synthetic classification experiment. The images are rendered from a subset of the ShapeNet dataset using the physically based renderer Mitsuba. The scene is illuminated by a point light modeled as a black-body radiator with temperatures ranging between [1900, 20000] K and an ambient light source. The training set contains 1,000 samples for each of the 10 object classes recorded under "normal" lighting conditions (T = 6500 K). Multiple test sets with 300 samples per class are rendered for a variety of light source intensities and colors.

[Figure: example renders from the Shapenet Illuminants dataset under different illuminants]
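If you want to inspect the data outside the provided training script, a plain torchvision ImageFolder loader should suffice, assuming the archive unpacks into one directory per split with one subdirectory per class (verify the layout after extraction; the paths below are placeholders):

import torch
from torchvision import datasets, transforms

# Assumed layout (check after unpacking):
#   shapenet_illuminants/train/<class>/*  and  shapenet_illuminants/test_<condition>/<class>/*
tf = transforms.ToTensor()
train_set = datasets.ImageFolder('path/to/shapenet_illuminants/train', transform=tf)
loader = torch.utils.data.DataLoader(train_set, batch_size=64, shuffle=True)
images, labels = next(iter(loader))
print(images.shape, labels[:5])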

Common Objects Day and Night

[Download link]

Common Objects Day and Night (CODaN) is a natural day-night image classification dataset. More information can be found in the separate GitHub repository: https://github.com/Attila94/CODaN.

[Figure: example day and night images from the CODaN dataset]

Experiments

1. Synthetic classification

  1. Download [link] and unpack the Shapenet Illuminants dataset.
  2. In your local CIConv clone navigate to experiments/1_synthetic_classification and run
python train.py --root 'path/to/shapenet_illuminants' --hflip --seed 0 --invariant 'W'

This will train a ResNet-18 with the 'W' color invariant from scratch and evaluate on all test sets.

[Figure: Shapenet Illuminants classification results across test set illuminants]

Classification accuracy of ResNet-18 with various color invariants. RGB (not invariant) performance degrades when illumination conditions differ between train and test set, while color invariants remain more stable. W performs best overall.

2. CODaN classification

  1. Download the Common Objects Day and Night (CODaN) dataset from https://github.com/Attila94/CODaN.
  2. In your local CIConv clone navigate to experiments/2_codan_classification and run
python train.py --root 'path/to/codan' --invariant 'W' --scale 0. --hflip --jitter 0.3 --rr 20 --seed 0

This will train a ResNet-18 with the 'W' color invariant from scratch and evaluate on all test sets.
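The augmentation flags plausibly map onto standard torchvision transforms roughly as sketched below; this is an assumption for illustration, and the actual pipeline is defined in train.py.

import torchvision.transforms as T

# Hypothetical reconstruction of --hflip, --jitter 0.3 and --rr 20.
train_tf = T.Compose([
    T.RandomHorizontalFlip(),                    # --hflip
    T.ColorJitter(brightness=0.3, contrast=0.3,
                  saturation=0.3, hue=0.3),      # --jitter 0.3
    T.RandomRotation(20),                        # --rr 20
    T.ToTensor(),
])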

Selected results from the paper:

Method     Day (% accuracy)   Night (% accuracy)
Baseline   80.39 ± 0.38       48.31 ± 1.33
E          79.79 ± 0.40       49.95 ± 1.60
W          81.49 ± 0.49       59.67 ± 0.93
C          78.04 ± 1.08       53.44 ± 1.28
N          77.44 ± 0.00       52.03 ± 0.27
H          75.20 ± 0.56       50.52 ± 1.34

3. Semantic segmentation

  1. Download and unpack the following public datasets: Cityscapes, Nighttime Driving, Dark Zurich.

  2. In your local CIConv clone navigate to experiments/3_segmentation.

  3. Set the proper dataset locations in train.py (see the hypothetical sketch below).

  4. Run

    python train.py --hflip --rc --jitter 0.3 --scale 0.3 --batch-size 6 --pretrained --invariant 'W'
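The variable names below are hypothetical and only illustrate the kind of edit step 3 refers to; check train.py for where the dataset roots are actually defined.

# Hypothetical dataset-location assignments in train.py; the real variable
# names and structure may differ in your version of the script.
cityscapes_root = '/data/cityscapes'              # leftImg8bit + gtFine
nighttime_driving_root = '/data/NighttimeDriving'
dark_zurich_root = '/data/DarkZurich'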

Selected results from the paper:

Method                 Nighttime Driving (mIoU)   Dark Zurich (mIoU)
RefineNet [baseline]   34.1                       30.6
W-RefineNet [ours]     41.6                       34.5

4. Visual place recognition

  1. Setup conda environment

    conda create -n ciconv python=3.9 mamba -c conda-forge
    conda activate ciconv
    mamba install pytorch==1.7.1 torchvision==0.8.2 torchaudio==0.7.2 cudatoolkit=10.1 scikit-image -c pytorch
  2. Navigate to experiments/4_visual_place_recognition/cnnimageretrieval-pytorch/.

  3. Run

    git submodule update --init # download a fork of cnnimageretrieval-pytorch
    sh cirtorch/utils/setup_tests.sh # download datasets and pre-trained models 
    python3 -m cirtorch.examples.test --network-path data/networks/retrieval-SfM-120k_w_resnet101_gem/model.path.tar --multiscale '[1, 1/2**(1/2), 1/2]' --datasets '247tokyo1k' --whitening 'retrieval-SfM-120k'
  4. Use --network-path retrievalSfM120k-resnet101-gem to compare against the vanilla method (i.e. a ResNet101 trained without the color invariant).

  5. Use --datasets 'gp_dl_nr' to test on the GardensPointWalking dataset.

Selected results from the paper:

Method                     Tokyo 24/7 (mAP)
ResNet101 GeM [baseline]   85.0
W-ResNet101 GeM [ours]     88.3

Citation

If you find this repository useful for your work, please cite as follows:

@article{lengyel2021zeroshot,
      title={Zero-Shot Domain Adaptation with a Physics Prior}, 
      author={Attila Lengyel and Sourav Garg and Michael Milford and Jan C. van Gemert},
      year={2021},
      eprint={2108.05137},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}