Universal Adversarial Examples in Remote Sensing: Methodology and Benchmark

Yonghao Xu and Pedram Ghamisi


This research has been conducted at the Institute of Advanced Research in Artificial Intelligence (IARAI).

This is the official PyTorch implementation of the black-box adversarial attack methods for remote sensing data in our paper Universal adversarial examples in remote sensing: Methodology and benchmark.

Table of Contents

  1. Dataset
  2. Supported methods and models
  3. Preparation
  4. Adversarial attacks on scene classification
  5. Adversarial attacks on semantic segmentation
  6. Performance evaluation on the UAE-RS dataset
  7. Paper
  8. Acknowledgement
  9. License

Dataset

We collect the generated universal adversarial examples into a dataset named UAE-RS, the first dataset to provide black-box adversarial samples in the remote sensing field.

📡 Download links:  Google Drive | Baidu NetDisk (Code: 8g1r)

To build UAE-RS, we use the Mixcut-Attack method to attack ResNet18 with 1050 test samples from the UCM dataset and 5000 test samples from the AID dataset for scene classification, and use the Mixup-Attack method to attack FCN-8s with 5 test images from the Vaihingen dataset (image IDs: 11, 15, 28, 30, 34) and 5 test images from the Zurich Summer dataset (image IDs: 16, 17, 18, 19, 20) for semantic segmentation.

Example images in the UCM dataset and the corresponding adversarial examples in the UAE-RS dataset.

Example images in the AID dataset and the corresponding adversarial examples in the UAE-RS dataset.

Qualitative results of the black-box adversarial attacks from FCN-8s → SegNet on the Vaihingen dataset.

(a) The original clean test images in the Vaihingen dataset. (b) The corresponding adversarial examples in the UAE-RS dataset. (c) Segmentation results of SegNet on the clean images. (d) Segmentation results of SegNet on the adversarial images. (e) Ground-truth annotations.
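
The adversarial images can be consumed like any other image dataset. The snippet below is a minimal sketch of loading the UAE-RS/UCM samples with torchvision's ImageFolder; the folder layout (class subfolders under UAE-RS/UCM matching the clean UCM classes) and the input size are assumptions, and the evaluation scripts described later in this README remain the authoritative way to reproduce the benchmark numbers.

# Illustration only: one way to load the UAE-RS adversarial scene-classification images.
# The class-subfolder layout and the input size are assumptions, not guarantees.
import torch
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

transform = transforms.Compose([
    transforms.Resize((256, 256)),   # placeholder input size
    transforms.ToTensor(),
])
adv_ucm = datasets.ImageFolder('<THE-ROOT-PATH-OF-DATA>/UAE-RS/UCM', transform=transform)
adv_loader = DataLoader(adv_ucm, batch_size=32, shuffle=False)

images, labels = next(iter(adv_loader))
print(images.shape, labels[:8])      # e.g. torch.Size([32, 3, 256, 256]) and the first few class indices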

Supported methods and models

This repo contains implementations of black-box adversarial attacks for remote sensing data on both scene classification and semantic segmentation tasks. The attack method, the surrogate network, and the target network are selected via command-line arguments (--attack_func, --surrogate_model, and --target_model; see the sections below), and the result tables at the end of this README list the networks evaluated as target models.

Preparation

  • Package requirements: The scripts in this repo are tested with torch==1.10 and torchvision==0.11 using two NVIDIA Tesla V100 GPUs.
  • Remote sensing datasets used in this repo: the UCM, AID, Vaihingen, and Zurich Summer datasets (see the folder structure below).
  • Data folder structure
    • The data folder is structured as follows:
├── <THE-ROOT-PATH-OF-DATA>/
│   ├── UCMerced_LandUse/
│   │   ├── Images/
│   │   │   ├── agricultural/
│   │   │   ├── airplane/
│   │   │   ├── ...
│   ├── AID/
│   │   ├── Airport/
│   │   ├── BareLand/
│   │   ├── ...
│   ├── Vaihingen/
│   │   ├── img/
│   │   ├── gt/
│   │   ├── ...
│   ├── Zurich/
│   │   ├── img/
│   │   ├── gt/
│   │   ├── ...
│   ├── UAE-RS/
│   │   ├── UCM/
│   │   ├── AID/
│   │   ├── Vaihingen/
│   │   ├── Zurich/
  • Pretraining the models for scene classification (an illustrative fine-tuning sketch is given after this list)
CUDA_VISIBLE_DEVICES=0,1 python pretrain_cls.py --network 'alexnet' --dataID 1 --root_dir <THE-ROOT-PATH-OF-DATA>
CUDA_VISIBLE_DEVICES=0,1 python pretrain_cls.py --network 'resnet18' --dataID 1 --root_dir <THE-ROOT-PATH-OF-DATA>
CUDA_VISIBLE_DEVICES=0,1 python pretrain_cls.py --network 'inception' --dataID 1 --root_dir <THE-ROOT-PATH-OF-DATA>
...
  • Pretraining the models for semantic segmentation
cd ./segmentation
CUDA_VISIBLE_DEVICES=0 python pretrain_seg.py --model 'fcn8s' --dataID 1 --root_dir <THE-ROOT-PATH-OF-DATA>
CUDA_VISIBLE_DEVICES=0 python pretrain_seg.py --model 'deeplabv2' --dataID 1 --root_dir <THE-ROOT-PATH-OF-DATA>
CUDA_VISIBLE_DEVICES=0 python pretrain_seg.py --model 'segnet' --dataID 1 --root_dir <THE-ROOT-PATH-OF-DATA>
...

Please replace <THE-ROOT-PATH-OF-DATA> with the local path where you store the remote sensing datasets.
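
As a rough illustration of the classification pretraining step above, the sketch below fine-tunes an ImageNet-pretrained ResNet18 on the UCM scene classes with torchvision's ImageFolder. It is not the repo's pretrain_cls.py: the data split, input size, augmentation, and training schedule here are placeholder assumptions, so use the commands above for the actual pretraining.

# Illustration only: a bare-bones fine-tuning loop in the spirit of pretrain_cls.py.
# The data split, input size, and schedule are placeholders, not the repo's settings.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')

transform = transforms.Compose([
    transforms.Resize((256, 256)),
    transforms.ToTensor(),
])
train_set = datasets.ImageFolder('<THE-ROOT-PATH-OF-DATA>/UCMerced_LandUse/Images', transform=transform)
train_loader = DataLoader(train_set, batch_size=32, shuffle=True, num_workers=4)

# Start from an ImageNet-pretrained backbone and replace the classification head (UCM has 21 classes).
model = models.resnet18(pretrained=True)
model.fc = nn.Linear(model.fc.in_features, 21)
model = model.to(device).train()

optimizer = torch.optim.SGD(model.parameters(), lr=1e-3, momentum=0.9)
criterion = nn.CrossEntropyLoss()

for epoch in range(10):
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()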

Adversarial attacks on scene classification

  • Generate adversarial examples:
CUDA_VISIBLE_DEVICES=0 python attack_cls.py --surrogate_model 'resnet18' \
                                            --attack_func 'fgsm' \
                                            --dataID 1 \
                                            --root_dir <THE-ROOT-PATH-OF-DATA>
  • Performance evaluation on the adversarial test set:
CUDA_VISIBLE_DEVICES=0 python test_cls.py --surrogate_model 'resnet18' \
                                          --target_model 'inception' \
                                          --attack_func 'fgsm' \
                                          --dataID 1 \
                                          --root_dir <THE-ROOT-PATH-OF-DATA>

You can change the --surrogate_model, --attack_func, and --target_model parameters to evaluate the performance under different attack scenarios.
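
For reference, the snippet below is a minimal sketch of the Fast Gradient Sign Method (FGSM) behind the 'fgsm' option: a single signed gradient step on the surrogate model's loss. It is an illustration only; attack_cls.py is the repo's implementation, and the epsilon value here is a placeholder rather than the repo's setting.

# Minimal FGSM sketch (illustration only; attack_cls.py is the repo's implementation).
import torch
import torch.nn as nn

def fgsm_attack(model, images, labels, epsilon=8.0 / 255.0):   # epsilon is a placeholder value
    """One signed gradient step on the surrogate model's cross-entropy loss."""
    images = images.clone().detach().requires_grad_(True)
    loss = nn.CrossEntropyLoss()(model(images), labels)
    loss.backward()
    # Perturb in the direction that increases the loss, then clip back to the valid image range.
    adv_images = images + epsilon * images.grad.sign()
    return adv_images.clamp(0, 1).detach()

The adversarial examples crafted on the surrogate model (e.g., ResNet18) are then fed to an unseen target model (e.g., Inception-v3) with test_cls.py to measure black-box transferability.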

Adversarial attacks on semantic segmentation

  • Generate adversarial examples:
cd ./segmentation
CUDA_VISIBLE_DEVICES=0 python attack_seg.py --surrogate_model 'fcn8s' \
                                            --attack_func 'fgsm' \
                                            --dataID 1 \
                                            --root_dir <THE-ROOT-PATH-OF-DATA>
  • Performance evaluation on the adversarial test set:
CUDA_VISIBLE_DEVICES=0 python test_seg.py --surrogate_model 'fcn8s' \
                                          --target_model 'segnet' \
                                          --attack_func 'fgsm' \
                                          --dataID 1 \
                                          --root_dir <THE-ROOT-PATH-OF-DATA>

You can change the --surrogate_model, --attack_func, and --target_model parameters to evaluate the performance under different attack scenarios.
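
The single-step idea carries over to segmentation: the main change is that the loss is a per-pixel cross-entropy over the label map. The sketch below is an illustration only; attack_seg.py is the repo's implementation, and the epsilon and ignore_index values are placeholder assumptions.

# Minimal FGSM-style sketch for segmentation (illustration only; see attack_seg.py).
import torch
import torch.nn as nn

def fgsm_attack_seg(model, images, label_maps, epsilon=8.0 / 255.0, ignore_index=255):
    """One signed gradient step on the per-pixel cross-entropy of a surrogate segmentation model."""
    images = images.clone().detach().requires_grad_(True)
    logits = model(images)                     # expected shape (N, C, H, W); some models return tuples
    loss = nn.CrossEntropyLoss(ignore_index=ignore_index)(logits, label_maps)  # label_maps: (N, H, W)
    loss.backward()
    adv_images = images + epsilon * images.grad.sign()
    return adv_images.clamp(0, 1).detach()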

Performance evaluation on the UAE-RS dataset

  • Scene classification:
CUDA_VISIBLE_DEVICES=0 python test_cls_uae_rs.py --target_model 'inception' \
                                                 --dataID 1 \
                                                 --root_dir <THE-ROOT-PATH-OF-DATA>

Scene classification results of different deep neural networks on the clean and UAE-RS adversarial test sets. OA denotes overall accuracy (%), and the OA gap is the adversarial OA minus the clean OA; a short sketch of the metric computation follows the tables below.

Model | UCM Clean OA | UCM Adversarial OA | UCM OA Gap | AID Clean OA | AID Adversarial OA | AID OA Gap
AlexNet | 90.28 | 30.86 | -59.42 | 89.74 | 18.26 | -71.48
VGG11 | 94.57 | 26.57 | -68.00 | 91.22 | 12.62 | -78.60
VGG16 | 93.04 | 19.52 | -73.52 | 90.00 | 13.46 | -76.54
VGG19 | 92.85 | 29.62 | -63.23 | 88.30 | 15.44 | -72.86
Inception-v3 | 96.28 | 24.86 | -71.42 | 92.98 | 23.48 | -69.50
ResNet18 | 95.90 | 2.95 | -92.95 | 94.76 | 0.02 | -94.74
ResNet50 | 96.76 | 25.52 | -71.24 | 92.68 | 6.20 | -86.48
ResNet101 | 95.80 | 28.10 | -67.70 | 92.92 | 9.74 | -83.18
ResNeXt50 | 97.33 | 26.76 | -70.57 | 93.50 | 11.78 | -81.72
ResNeXt101 | 97.33 | 33.52 | -63.81 | 95.46 | 12.60 | -82.86
DenseNet121 | 97.04 | 17.14 | -79.90 | 95.50 | 10.16 | -85.34
DenseNet169 | 97.42 | 25.90 | -71.52 | 95.54 | 9.72 | -85.82
DenseNet201 | 97.33 | 26.38 | -70.95 | 96.30 | 9.60 | -86.70
RegNetX-400MF | 94.57 | 27.33 | -67.24 | 94.38 | 19.18 | -75.20
RegNetX-8GF | 97.14 | 40.76 | -56.38 | 96.22 | 19.24 | -76.98
RegNetX-16GF | 97.90 | 34.86 | -63.04 | 95.84 | 13.34 | -82.50
  • Semantic segmentation:
cd ./segmentation
CUDA_VISIBLE_DEVICES=0 python test_seg_uae_rs.py --target_model 'segnet' \
                                                 --dataID 1 \
                                                 --root_dir <THE-ROOT-PATH-OF-DATA>

Semantic segmentation results of different deep neural networks on the clean and UAE-RS adversarial test sets. mF1 denotes the mean F1 score (%), and the mF1 gap is the adversarial mF1 minus the clean mF1.

Model | Vaihingen Clean mF1 | Vaihingen Adversarial mF1 | Vaihingen mF1 Gap | Zurich Summer Clean mF1 | Zurich Summer Adversarial mF1 | Zurich Summer mF1 Gap
FCN-32s | 69.48 | 35.00 | -34.48 | 66.26 | 32.31 | -33.95
FCN-16s | 69.70 | 27.02 | -42.68 | 66.34 | 34.80 | -31.54
FCN-8s | 82.22 | 22.04 | -60.18 | 79.90 | 40.52 | -39.38
DeepLab-v2 | 77.04 | 34.12 | -42.92 | 74.38 | 45.48 | -28.90
DeepLab-v3+ | 84.36 | 14.56 | -69.80 | 82.51 | 62.55 | -19.96
SegNet | 78.70 | 17.84 | -60.86 | 75.59 | 35.58 | -40.01
ICNet | 80.89 | 41.00 | -39.89 | 78.87 | 59.77 | -19.10
ContextNet | 81.17 | 47.80 | -33.37 | 77.89 | 63.71 | -14.18
SQNet | 81.85 | 39.08 | -42.77 | 76.32 | 55.29 | -21.03
PSPNet | 83.11 | 21.43 | -61.68 | 77.55 | 65.39 | -12.16
U-Net | 83.61 | 16.09 | -67.52 | 80.78 | 56.58 | -24.20
LinkNet | 82.30 | 24.36 | -57.94 | 79.98 | 48.67 | -31.31
FRRNetA | 84.17 | 16.75 | -67.42 | 80.50 | 58.20 | -22.30
FRRNetB | 84.27 | 28.03 | -56.24 | 79.27 | 67.31 | -11.96
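
The OA and mF1 gaps reported above are simply the metric on the adversarial test set minus the metric on the clean test set. The sketch below shows how such metrics can be computed from a confusion matrix; test_cls_uae_rs.py and test_seg_uae_rs.py are the authoritative evaluation scripts and may aggregate results differently.

# Illustrative metric computation; the repo's test_*_uae_rs.py scripts are authoritative.
import numpy as np

def overall_accuracy(conf_mat):
    """Overall accuracy (OA) from a confusion matrix (rows: ground truth, columns: prediction)."""
    return np.diag(conf_mat).sum() / conf_mat.sum()

def mean_f1(conf_mat):
    """Mean F1 score (mF1) over all classes from the same confusion matrix."""
    tp = np.diag(conf_mat).astype(float)
    fp = conf_mat.sum(axis=0) - tp
    fn = conf_mat.sum(axis=1) - tp
    f1 = 2 * tp / np.maximum(2 * tp + fp + fn, 1e-12)
    return f1.mean()

# The gaps reported above are (metric on the adversarial test set) - (metric on the clean test set):
# oa_gap  = overall_accuracy(conf_adv) - overall_accuracy(conf_clean)
# mf1_gap = mean_f1(conf_adv) - mean_f1(conf_clean)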

Paper

Universal adversarial examples in remote sensing: Methodology and benchmark

Please cite the following paper if you use the data or the code:

@article{uaers,
  title={Universal adversarial examples in remote sensing: Methodology and benchmark}, 
  author={Xu, Yonghao and Ghamisi, Pedram},
  journal={arXiv preprint arXiv:2202.07054},
  year={2022},
}

Acknowledgement

The authors would like to thank Prof. Shawn Newsam for making the UCM dataset publicly available, Prof. Gui-Song Xia for providing the AID dataset, the International Society for Photogrammetry and Remote Sensing (ISPRS) and the German Society for Photogrammetry, Remote Sensing and Geoinformation (DGPF) for providing the Vaihingen dataset, and Dr. Michele Volpi for providing the Zurich Summer dataset.

The authors also acknowledge the following open-source projects:

  • Efficient-Segmentation-Networks
  • segmentation_models.pytorch
  • Adversarial-Attacks-PyTorch

License

This repo is distributed under the MIT License. The UAE-RS dataset can be used for academic purposes only.
