The official implementation of the NeurIPS 2021 paper: Finding Optimal Tangent Points for Reducing Distortions of Hard-label Attacks

Overview

Introduction

This repository contains the source code for "Finding Optimal Tangent Points for Reducing Distortions of Hard-label Attacks", published at NeurIPS 2021.

Citation

We kindly ask anybody who uses this code to cite the following BibTeX entry:

@inproceedings{
    ma2021finding,
    title={Finding Optimal Tangent Points for Reducing Distortions of Hard-label Attacks},
    author={Chen Ma and Xiangyu Guo and Li Chen and Jun-Hai Yong and Yisen Wang},
    booktitle={Thirty-Fifth Conference on Neural Information Processing Systems},
    year={2021},
    url={https://openreview.net/forum?id=g0wang64Zjd}
}

Structure of Folders and Files

+-- configures
|   |-- HSJA.json  # the hyperparameter settings of HSJA, which are also used in Tangent Attack
+-- dataset
|   |-- dataset_loader_maker.py  # returns the data loader class that provides the 1,000 images attacked in the experiments.
|   |-- npz_dataset.py  # the dataset class that wraps the 1,000 images attacked in the experiments.
+-- models
|   |-- defensive_model.py # the wrapper of defensive networks (e.g., AT, ComDefend, Feature Scatter); it converts the input image's pixels to the range of 0 to 1 before feeding.
|   |-- standard_model.py # the wrapper of standard classification networks; it converts the input image's pixels to the range of 0 to 1 before feeding (see the sketch after this tree).
+-- tangent_attack_hemisphere
|   |-- attack.py  # the main class of the attack.
|   |-- tangent_point_analytical_solution.py  # the class that computes the optimal tangent point of the hemisphere.
+-- tangent_attack_semiellipsoid
|   |-- attack.py  # the main class of the attack.
|   |-- tangent_point_analytical_solution.py  # the class that computes the optimal tangent point of the semi-ellipsoid.
+-- cifar_models   # the target models of CIFAR-10, i.e., the PyramidNet-272, GDAS, WRN-28, and WRN-40 networks.
|-- config.py   # the main configuration of Tangent Attack.
|-- logs  # all output (logs and result statistics files) is written to this folder.
|-- train_pytorch_model  # the pretrained weights of the target models.
|-- attacked_images  # the 1,000 images used for evaluation
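As a rough illustration of what the wrappers in models/ do (standard_model.py and defensive_model.py above), the minimal sketch below rescales raw pixels into [0, 1] before forwarding to the wrapped network. The class name NormalizedModel and the assumption that inputs arrive in [0, 255] are hypothetical and only for illustration; the real wrappers also handle details such as loading the per-dataset architectures.

import torch.nn as nn

class NormalizedModel(nn.Module):
    # Hypothetical wrapper: rescales raw pixels into [0, 1] before the backbone,
    # assuming the inputs arrive in [0, 255]. The actual wrappers live in
    # models/standard_model.py and models/defensive_model.py.
    def __init__(self, backbone):
        super().__init__()
        self.backbone = backbone

    def forward(self, x):
        x = x / 255.0            # convert pixel values to the [0, 1] range
        return self.backbone(x)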

In general, train_pytorch_model contains the pretrained weights of the target models, and attacked_images contains the evaluation images, which are packaged in .npz format with pixel values in the range [0, 1].
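For reference, such a .npz file can be inspected directly with NumPy, as in the minimal sketch below. The file path and the array keys (images, labels) are assumptions for illustration only; check dataset/npz_dataset.py for the names actually used.

import numpy as np

# Hypothetical file path and array keys; see dataset/npz_dataset.py for the real ones.
data = np.load("attacked_images/CIFAR-10/images.npz")
images = data["images"]   # assumed shape (1000, 3, 32, 32), pixel values in [0, 1]
labels = data["labels"]   # assumed shape (1000,)
print(images.shape, float(images.min()), float(images.max()), labels.shape)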

During an attack, all logs are dumped to the logs folder, and the statistical results are also written there as .json files.

Attack Commands

The following commands run Tangent Attack (TA) and Generalized Tangent Attack (G-TA) on the CIFAR-10 dataset under the untargeted attack setting:

python tangent_attack_hemisphere/attack.py --gpu 0 --norm l2 --dataset CIFAR-10 --arch resnet-50
python tangent_attack_hemisphere/attack.py --gpu 0 --norm l2 --dataset CIFAR-10 --arch gdas
python tangent_attack_semiellipsoid/attack.py --gpu 0 --norm l2 --dataset CIFAR-10 --arch resnet-50
python tangent_attack_semiellipsoid/attack.py --gpu 0 --norm l2 --dataset CIFAR-10 --arch gdas

Once the attack is running, it directly writes the log into a newly created logs folder. After the attack finishes, the statistical results are also dumped into the same folder as a *.json file.
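If you want to post-process the results programmatically, such a .json file can be loaded as in the sketch below; the path shown is hypothetical, and the fields inside are determined by the attack code, so inspect the keys rather than relying on any names assumed here.

import json

# Hypothetical path; the actual file name is generated by the attack script inside logs/.
with open("logs/some_experiment/result_stats.json") as f:
    stats = json.load(f)
print(sorted(stats.keys()))  # inspect which fields the attack recorded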

Alternatively, you can use the following shell script to attack different models one by one.

./tangent_attack_CIFAR_undefended_models.sh

The commands for attacking defense models are provided in tangent_attack_CIFAR_defense_models.sh.

  • The GPU device can be specified with the --gpu device_id argument.
  • A targeted attack can be enabled with the --targeted argument; to perform an untargeted attack, simply omit it.
  • Attacks on defense models use the --attack_defense --defense_model adv_train/jpeg/com_defend/TRADES arguments (see the example commands after this list).
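For example, these flags can be appended to the commands shown earlier (this particular combination is only illustrative; all arguments are taken from this README):

python tangent_attack_hemisphere/attack.py --gpu 0 --norm l2 --dataset CIFAR-10 --arch resnet-50 --targeted
python tangent_attack_hemisphere/attack.py --gpu 0 --norm l2 --dataset CIFAR-10 --arch resnet-50 --attack_defense --defense_model adv_train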

Requirements

Our code is tested on the following environment (probably also works on other environments without many changes):

  • Ubuntu 18.04
  • Python 3.7.3
  • CUDA 11.1
  • CUDNN 8.0.4
  • PyTorch 1.7.1
  • torchvision 0.8.2
  • numpy 1.18.0
  • pretrainedmodels 0.7.4
  • bidict 0.18.0
  • advertorch 0.1.5
  • glog 0.3.1

You can simply run pip install -r requirements.txt to install the required packages.

Download Files of Running Results and Logs

I have uploaded all the logs and results as a compressed zip file to this Google Drive link so that you can download them.
