Beyond ImageNet Attack (accepted at ICLR 2022): towards crafting adversarial examples for black-box domains.

Overview

Beyond ImageNet Attack: Towards Crafting Adversarial Examples for Black-box Domains (ICLR'2022)

This is the PyTorch code for our paper Beyond ImageNet Attack: Towards Crafting Adversarial Examples for Black-box Domains. In this paper, using only knowledge of the ImageNet domain, we propose the Beyond ImageNet Attack (BIA) to investigate transferability towards black-box domains (unknown classification tasks).

Requirements

  • Python 3.7
  • PyTorch 1.8.0
  • torchvision 0.9.0
  • numpy 1.20.2
  • scipy 1.7.0
  • pandas 1.3.0
  • opencv-python 4.5.2.54
  • joblib 0.14.1
  • Pillow 6.1

Dataset


  • Download the ImageNet training dataset.

  • Download the testing dataset.

Note: After downloading CUB-200-2011, Stanford Cars and FGVC Aircraft, you should set "self.rawdata_root" (DCL_finegrained/config.py, lines 59-75) to the path where you saved each dataset.
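For illustration only, a hypothetical sketch of that edit, assuming the config assigns self.rawdata_root per dataset (the dataset keys and paths below are placeholders, not the file's actual contents):

    # Illustrative sketch only; not the actual DCL_finegrained/config.py
    class LoadConfig:
        def __init__(self, dataset):
            # Point self.rawdata_root at the folder where you saved each dataset.
            roots = {
                'CUB': '/your/path/CUB_200_2011/images',         # CUB-200-2011
                'STCAR': '/your/path/stanford_cars',             # Stanford Cars
                'AIR': '/your/path/fgvc-aircraft-2013b/images',  # FGVC Aircraft
            }
            self.rawdata_root = roots[dataset]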

Target model

The checkpoints of the target models should be placed in the model folder.

  • CUB-200-2011, Stanford Cars and FGVC Aircraft checkpoints can be downloaded from here.
  • CIFAR-10, CIFAR-100, STL-10 and SVHN are downloaded automatically.
  • ImageNet pre-trained models are available from torchvision (see the snippet below).
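For reference, the four ImageNet source models used here can be loaded directly from torchvision (a minimal sketch; the repo may wrap these loaders differently):

    import torchvision.models as models

    # ImageNet pre-trained source models (torchvision 0.9.0 API)
    vgg16 = models.vgg16(pretrained=True).eval()
    vgg19 = models.vgg19(pretrained=True).eval()
    resnet152 = models.resnet152(pretrained=True).eval()
    densenet169 = models.densenet169(pretrained=True).eval()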

Pretrained-Generators

Adversarial generators are trained against the following four ImageNet pre-trained models:

  • VGG19
  • VGG16
  • ResNet152
  • DenseNet169

After training finishes, the resulting generator is saved to the saved_models folder. You can also download our pretrained generators from here.

Train

Train the generator using vanilla BIA (RN: False, DA: False)

python train.py --model_type vgg16 --train_dir your_imagenet_path --RN False --DA False

your_imagenet_path is the path where you downloaded the ImageNet training set.
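To illustrate the objective being optimized, here is a minimal sketch of one BIA training step, assuming the generator's output is projected into an L-infinity ball around the clean image and the loss is the cosine similarity between intermediate source-model features of the adversarial and clean images (the names generator, feature_extractor and eps are placeholders, not the repo's actual identifiers):

    import torch
    import torch.nn.functional as F

    def bia_step(generator, feature_extractor, imgs, optimizer, eps=10.0 / 255):
        # Project the generated image into an eps-ball around the clean image.
        adv = imgs + torch.clamp(generator(imgs) - imgs, -eps, eps)
        adv = torch.clamp(adv, 0.0, 1.0)

        feat_adv = feature_extractor(adv).flatten(1)               # features of adversarial images
        feat_clean = feature_extractor(imgs).flatten(1).detach()   # features of clean images

        # Decreasing the cosine similarity pushes adversarial features away from clean ones.
        loss = F.cosine_similarity(feat_adv, feat_clean).mean()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()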

Evaluation

Evaluate the performance of vanilla BIA (RN: False, DA: False)

python eval.py --model_type vgg16 --RN False --DA False
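Generators trained against the other source models can presumably be evaluated the same way by changing --model_type, e.g. (assuming the same model names as in training):

python eval.py --model_type resnet152 --RN False --DA False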

Citing this work

If you find this work useful in your research, please consider citing:

@inproceedings{Zhang2022BIA,
  author    = {Qilong Zhang and
               Xiaodan Li and
               Yuefeng Chen and
               Jingkuan Song and
               Lianli Gao and
               Yuan He and
               Hui Xue},
  title     = {Beyond ImageNet Attack: Towards Crafting Adversarial Examples for Black-box Domains},
  booktitle = {International Conference on Learning Representations},
  year      = {2022}
}

Acknowledgements

Thanks to @aaron-xichen and @Muzammal-Naseer for sharing their code.

Comments
  • About the comparative methods

    Thank you for your insightful work! In Table 3, I would like to know how to perform PGD or DIM on CUB with source models pretrained on ImageNet. Thank you~

    opened by lwmming 6
  • cursor already registered in Tk_GetCursor Aborted (core dumped)

    python train.py --model_type vgg16 --RN False --DA False

    I tried the above default training, but this error occurred at the end of the first training epoch. Can you help debug it?

    opened by hoonsyang 2
  • missing file

    https://github.com/Alibaba-AAIG/Beyond-ImageNet-Attack/blob/7e8b1b8ec5728ebc01723f2c444bf2d5275ee7be/DCL_finegrained/LoadModel.py#L6 raises NameError: name 'pretrainedmodels' is not defined

    opened by nkv1995 2
  • when computing cosine similarity

    Hi! This is more of a question about the elegant work you have here than an issue.

    When you take the cosine similarity (which is to be decreased during training) between two feature maps, you compute

    loss = torch.cosine_similarity((adv_out_slice*attention).reshape(adv_out_slice.shape[0], -1), 
                                   (img_out_slice*attention).reshape(img_out_slice.shape[0], -1)).mean()

    which compares two flattened vectors, each a feature map of size (feature channels, width, height) flattened into one dimension.

    I wonder why you don't compare the flattened feature maps channel by channel and then average across channels. As written, you compare two vectors that are (channels × width × height)-dimensional, which is not so intuitive to me. Thanks in advance for any intuition behind this!
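    For reference, the per-channel variant described above could be sketched as follows (a hypothetical alternative reusing the names from the quoted snippet; N and C denote the batch and channel sizes):

    N, C = adv_out_slice.shape[0], adv_out_slice.shape[1]
    # Cosine similarity over the spatial dimension of each channel, shape (N, C),
    # then averaged across channels and the batch (not the repo's actual code).
    sim = torch.cosine_similarity((adv_out_slice * attention).reshape(N, C, -1),
                                  (img_out_slice * attention).reshape(N, C, -1), dim=2)
    loss = sim.mean()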

    opened by juliuswang0728 1
Releases

  • pretrained_models

Owner

Alibaba-AAIG (Alibaba Artificial Intelligence Governance Laboratory)