IAug_CDNet

Official PyTorch Implementation of Adversarial Instance Augmentation for Building Change Detection in Remote Sensing Images.

Overview

We propose a novel data-level solution, namely Instance-level change Augmentation (IAug), which generates bi-temporal images containing changes to many diverse buildings by leveraging generative adversarial training. The key to IAug is to blend synthesized building instances onto suitable positions in one of the bi-temporal images. To achieve this, a building generator is employed to produce realistic building images that are consistent with the given layouts; diverse styles are then transferred onto the generated images. We further propose context-aware blending for a realistic composite of the building and the background. We augment the existing CD datasets and also design a simple yet effective CD model, CDNet. Our method (CDNet + IAug) achieves state-of-the-art results on two building CD datasets (LEVIR-CD and WHU-CD). Interestingly, with only 20% of the training data we obtain results comparable to current state-of-the-art methods that use 100% of the data. Extensive experiments validate the effectiveness of the proposed IAug. Our augmented dataset has a lower risk of class imbalance than the original one, and conventional learning on the synthesized dataset outperforms several popular cost-sensitive algorithms on the original dataset.

Building Generator

See building generator for details.

Synthesized images (256 × 256) by the generator (trained on the AIRS building dataset).

Synthesized images (64 × 64) by the generator (trained on the Inria building dataset).

Installation

This code requires PyTorch 1.0 and Python 3+. Please install the dependencies with

pip install -r requirements.txt

Generating Images Using Pretrained Model

Once the dataset is ready, the result images can be generated using pretrained models.

  1. Download the tar archive of the pretrained models from Google Drive.

  2. Generate images using the pretrained model.

    python test.py --model pix2pix --name $pretrained_folder --results_dir $results_dir --dataset_mode custom --label_dir $label_dir --label_nc 2 --batchSize $batchSize --load_size $size --crop_size $size --no_instance --which_epoch latest

    • pretrained_folder: the directory name of the checkpoint downloaded in Step 1.
    • results_dir: the directory in which the synthesized images are saved.
    • label_dir: the directory containing the semantic label maps.
    • size: the size of the label map fed to the generator.

  3. The output images are stored in results_dir. You can view them using the autogenerated HTML file in that directory.

For simplicity, we also provide a test script in scripts/run_test.sh; modify label_dir and name, then run the script.

Training New Models

New models can be trained with the following commands.

  1. Prepare the dataset: place the building image patches and the corresponding label maps in two folders (image_dir and label_dir).

  2. Train the model.

# To train on your own custom dataset
python train.py --name [experiment_name] --dataset_mode custom --label_dir [label_dir] --image_dir [image_dir] --label_nc 2

There are many options you can specify; run python train.py --help to list them. The specified options are printed to the console. To set the number of GPUs to utilize, use --gpu_ids; for example, to use the second and third GPUs, pass --gpu_ids 1,2.

To log training, use --tf_log for TensorBoard. The logs are stored at [checkpoints_dir]/[name]/logs.
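For instance, a full training call on two GPUs with TensorBoard logging might look like the following (the experiment name is a placeholder; substitute your own label and image directories):

# Example: custom dataset, two GPUs, TensorBoard logging (hypothetical experiment name)
python train.py --name my_buildings --dataset_mode custom --label_dir [label_dir] --image_dir [image_dir] --label_nc 2 --gpu_ids 1,2 --tf_log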

Acknowledgments

This code borrows heavily from SPADE.

Color Transfer

See Color Transfer for details.

We resort to a simple yet effective non-learning approach to match the color distributions of the two image sets (GAN-generated images and the original images in the change detection dataset).


Requirements

  • Matlab

Usage

We provide two demos to show the color transfer.

If you do not have the object mask, edit the file ColorTransferDemo.m and modify the file paths of Im_target and Im_source. After you run this file, the transferred image is saved as result_Image.jpg.

If you have both the building image and the object mask, edit the file ColorTransferDemo_with_mask.m and modify the file paths of Im_target, Im_source, m_target, and m_source. After you run this file, the transferred image is saved as result_Image.jpg.
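If Matlab is not available, the idea can also be sketched in Python. The following is a minimal sketch, assuming Reinhard-style mean/standard-deviation matching in the Lab color space; it illustrates the statistics-matching idea and is not a line-by-line port of ColorTransferDemo.m.

# Minimal sketch of statistics-based color transfer (assumed Lab-space
# mean/std matching); not a port of ColorTransferDemo.m.
import cv2
import numpy as np

def color_transfer(source_bgr, target_bgr):
    """Shift the color statistics of source_bgr toward those of target_bgr."""
    src = cv2.cvtColor(source_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    tgt = cv2.cvtColor(target_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    src_mean, src_std = src.mean(axis=(0, 1)), src.std(axis=(0, 1)) + 1e-6
    tgt_mean, tgt_std = tgt.mean(axis=(0, 1)), tgt.std(axis=(0, 1))
    out = (src - src_mean) / src_std * tgt_std + tgt_mean
    return cv2.cvtColor(np.clip(out, 0, 255).astype(np.uint8), cv2.COLOR_LAB2BGR)

# Usage (Im_source: GAN-generated image, Im_target: image from the CD dataset):
# result = color_transfer(cv2.imread('Im_source.jpg'), cv2.imread('Im_target.jpg'))
# cv2.imwrite('result_Image.jpg', result)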

Acknowledgments

This code borrows heavily from https://github.com/AissamDjahnine/ColorTransfer.

Shadow Extraction

We show a simple shadow extraction method. The extracted shadow information can be used to make a more realistic image composite in the subsequent compositing step.


We provide some examples for shadow extraction. The samples are in the folder samples\shadow_sample.

Usage

You can edit the file extract_shadow.py and modify the paths of image_folder, label_folder, and out_folder. Make sure that the image files are in image_folder and the corresponding label files are in label_folder. Then run the following script:

python extract_shadow.py

Once the script has run successfully, the results can be found in out_folder.
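As a rough illustration of what a simple shadow extractor can look like, here is a minimal Python sketch that thresholds dark pixels in a ring around the building footprint; this is an assumed approach for illustration, not necessarily the exact logic of extract_shadow.py.

# Minimal sketch of threshold-based shadow extraction around building footprints
# (an assumed approach; see extract_shadow.py for the actual implementation).
import cv2
import numpy as np

def extract_shadow(image_bgr, building_mask, value_thresh=80, band=15):
    """Return a 0/1 mask of dark pixels in a band around the building footprint."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    dark = (hsv[:, :, 2] < value_thresh).astype(np.uint8)   # low-brightness pixels
    building = (building_mask > 0).astype(np.uint8)
    # Ring around the footprint where a cast shadow is expected.
    ring = cv2.dilate(building, np.ones((band, band), np.uint8)) - building
    return dark * ring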

Instance Augmentation

Here, we provide the Python implementation of instance augmentation.


We provide some examples for instance augmentation. The samples are in the folder samples\SYN_CD.

Usage

You can edit the file composite_CD_sample.py and modify the following values:

import os

# First, define the dataset paths.
A_folder = r'samples\LEVIR\A'
B_folder = r'samples\LEVIR\B'
L_folder = r'samples\LEVIR\label'
ref_folder = r'samples\LEVIR\ref'
# Instance paths.
src_folder = r'samples\SYN_CD\image'
label_folder = r'samples\SYN_CD\shadow'
out_folder = r'samples\SYN_CD\out_sample'
os.makedirs(out_folder, exist_ok=True)
# How many instances to paste per sample.
M = 50

Then, run the following script:

python composite_CD_sample.py

Once the script has run successfully, the results can be found in out_folder.
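For reference, the core pasting step can be summarized by the simplified sketch below: a synthesized building (with its mask) is copied onto image B, and the pasted footprint is marked as change in the label. This is a bare-bones illustration and omits the context-aware placement and blending performed in composite_CD_sample.py.

# Simplified sketch of pasting one synthesized instance onto image B
# (omits the context-aware placement/blending of composite_CD_sample.py).
import numpy as np

def paste_instance(image_b, label, instance_rgb, instance_mask, top_left):
    """Copy masked instance pixels into image_b and mark them as change in label."""
    y, x = top_left
    h, w = instance_mask.shape[:2]
    region = image_b[y:y + h, x:x + w]
    m = (instance_mask > 0)[..., None]                 # (h, w, 1) boolean mask
    image_b[y:y + h, x:x + w] = np.where(m, instance_rgb, region)
    label[y:y + h, x:x + w][instance_mask > 0] = 255   # pasted building = change
    return image_b, label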

CDNet

Coming soon~~~~

Citation

If you use this code for your research, please cite our paper:

@Article{chen2021,
    title={Adversarial Instance Augmentation for Building Change Detection in Remote Sensing Images},
    author={Hao Chen and Wenyuan Li and Zhenwei Shi},
    year={2021},
    journal={IEEE Transactions on Geoscience and Remote Sensing},
    volume={},
    number={},
    pages={1-16},
    doi={10.1109/TGRS.2021.3066802}
}