A Python library for adversarial machine learning focusing on benchmarking adversarial robustness.

Overview

ARES

This repository contains the code for ARES (Adversarial Robustness Evaluation for Safety), a Python library for adversarial machine learning research that focuses on benchmarking adversarial robustness on image classification correctly and comprehensively.

We benchmark adversarial robustness using 15 attacks and 16 defenses under complete threat models, as described in the following paper:

Benchmarking Adversarial Robustness on Image Classification (CVPR 2020, Oral)

Yinpeng Dong, Qi-An Fu, Xiao Yang, Tianyu Pang, Hang Su, Zihao Xiao, and Jun Zhu.

Feature overview:

  • Built on TensorFlow, with support for TensorFlow & PyTorch models under the same interface.
  • Supports many attacks under various threat models.
  • Provides ready-to-use pre-trained baseline models (8 on ImageNet & 8 on CIFAR10).
  • Provides efficient & easy-to-use tools for benchmarking models.

Citation

If you find ARES useful, please cite our paper on benchmarking adversarial robustness, which uses all of the models, attacks, and defenses supported in ARES. A BibTeX entry for the paper is provided below:

@inproceedings{dong2020benchmarking,
  title={Benchmarking Adversarial Robustness on Image Classification},
  author={Dong, Yinpeng and Fu, Qi-An and Yang, Xiao and Pang, Tianyu and Su, Hang and Xiao, Zihao and Zhu, Jun},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  pages={321--331},
  year={2020}
}

Installation

Since ARES is still under development, please clone the repository and install the package:

git clone https://github.com/thu-ml/ares
cd ares/
pip install -e .

The requirements.txt file lists the dependencies; you may want to adjust the versions of PyTorch and TensorFlow 1. TensorFlow 1.13 or later should work.

As for the Python version, Python 3.5 or later should work.

The Boundary attack and the Evolutionary attack require mpi4py and a working MPI installation with enough localhost slots. For example, for OpenMPI you could set the OMPI_MCA_rmaps_base_oversubscribe environment variable to yes.
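
In a shell, that would look like the following (shown here for OpenMPI; adjust for your MPI implementation):

export OMPI_MCA_rmaps_base_oversubscribe=yes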

Download Datasets & Model Checkpoints

By default, ARES saves datasets and model checkpoints under the ~/.ares directory. You can override this by setting the ARES_RES_DIR environment variable to an alternative location.
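
For example, to keep these files on a larger disk (the path below is only a placeholder):

export ARES_RES_DIR=/data/ares_resources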

We support 2 datasets: CIFAR-10 and ImageNet.

To download the CIFAR-10 dataset, please run:

python3 ares/dataset/cifar10.py

For instructions on downloading the ImageNet dataset, please run:

python3 ares/dataset/imagenet.py

ARES includes third-party models' code in the third_party/ directory as git submodules. Before using these models, you need to initialize the submodules:

git submodule init
git submodule update --depth 1

The example/cifar10 and example/imagenet directories include wrappers for these models. Run a model's .py file to download its checkpoint or to view download instructions. For example, to download the ResNet56 model's checkpoint, please run:

python3 example/cifar10/resnet56.py

Documentation

We provide API docs as well as tutorials at https://thu-ml-ares.rtfd.io/.

Quick Examples

ARES provides a command line interface for running benchmarks. For example, to run the distortion benchmark on the ResNet56 model for the CIFAR-10 dataset using the CLI:

python3 -m ares.benchmark.distortion_cli --method mim --dataset cifar10 --offset 0 --count 1000 --output mim.npy example/cifar10/resnet56.py --distortion 0.1 --goal ut --distance-metric l_inf --batch-size 100 --iteration 10 --decay-factor 1.0 --logger

This command finds the minimal adversarial distortion achieved by the MIM attack (with a decay factor of 1.0) against the example/cifar10/resnet56.py model under the L∞ distance, and saves the results to mim.npy.
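
The saved .npy file can be inspected with NumPy. A minimal sketch, assuming mim.npy holds one minimal-distortion value per evaluated input (see the documentation for the exact output format):

python3 -c "import numpy as np; d = np.load('mim.npy'); print(d.shape, np.nanmean(d))"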

For more examples and usage (e.g., how to define new models), please browse the documentation website mentioned above.

Acknowledgement

This work was supported by the National Key Research and Development Program of China, the Beijing Academy of Artificial Intelligence (BAAI), and a grant from the Tsinghua Institute for Guo Qiang.

Owner
Tsinghua Machine Learning Group