PyTorch implementation for "Adversarial Robustness under Long-Tailed Distribution" (CVPR 2021 Oral)

Overview

Adversarial Long-Tail

This repository contains the PyTorch implementation of the paper:

Adversarial Robustness under Long-Tailed Distribution, CVPR 2021 (Oral)

Tong Wu, Ziwei Liu, Qingqiu Huang, Yu Wang, Dahua Lin

Real-world data usually exhibits a long-tailed distribution, while previous works on adversarial robustness mainly focus on balanced datasets. To push adversarial robustness toward more realistic scenarios, in this work we investigate adversarial vulnerability as well as defense under long-tailed distributions. We perform a systematic study of existing long-tailed recognition (LT) methods in conjunction with the adversarial training (AT) framework and obtain several valuable observations. We then propose a clean yet effective framework, RoBal, which consists of two dedicated modules: a scale-invariant classifier, and data re-balancing via both margin engineering at the training stage and boundary adjustment during inference.
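
For intuition, a scale-invariant classifier can be thought of as a cosine classifier that computes logits from L2-normalized features and weights. Below is a minimal generic sketch of that idea in PyTorch; it is not RoBal's exact formulation, which additionally involves the margin and bias terms described in the paper, and the scale value here is illustrative.

import torch
import torch.nn as nn
import torch.nn.functional as F

class CosineClassifier(nn.Module):
    # Generic scale-invariant classifier sketch (illustrative only).
    def __init__(self, in_dim, num_classes, scale=16.0):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(num_classes, in_dim))
        self.scale = scale  # illustrative value, not RoBal's setting

    def forward(self, feat):
        # Logits are scaled cosine similarities, hence invariant to the
        # norms of both the features and the classifier weights.
        return self.scale * F.linear(F.normalize(feat, dim=1),
                                     F.normalize(self.weight, dim=1))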

This repository includes:

  • Code for the LT methods applied with the AT framework in our study.
  • Code and pre-trained models for our method.

Environment

Datasets

We use the CIFAR-10-LT and CIFAR-100-LT datasets. The data will be automatically downloaded and converted.
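
The imbalance ratio is encoded in the config names (e.g., LT0.02). For intuition, CIFAR-LT benchmarks such as LDAM-DRW commonly use an exponential profile of per-class sample counts; below is a minimal sketch under that assumption. The repository generates the data automatically, so this is for illustration only.

# Illustrative only: the repository builds the LT datasets automatically.
def samples_per_class(n_max=5000, num_classes=10, imbalance_ratio=0.02):
    # Class i keeps n_max * ratio^(i / (num_classes - 1)) samples, so the
    # head class keeps n_max and the tail class keeps n_max * ratio.
    return [int(n_max * imbalance_ratio ** (i / (num_classes - 1)))
            for i in range(num_classes)]

print(samples_per_class())  # roughly [5000, 3237, ..., 154, 100]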

Usage

Baseline

To train and evaluate a baseline model, run the following commands:

# Vanilla FC for CIFAR-10-LT
python train.py configs/CIFAR10_LT/cifar10_LT0.02_pgdat.yaml
python test.py configs/CIFAR10_LT/cifar10_LT0.02_pgdat.yaml -a ALL

# Vanilla FC for CIFAR-100-LT
python train.py configs/CIFAR100_LT/cifar100_LT0.1_pgdat.yaml
python test.py configs/CIFAR100_LT/cifar100_LT0.1_pgdat.yaml -a ALL

Here, -a ALL means the model is evaluated against five attacks: FGSM, PGD, MIM, CW, and AutoAttack.
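
For intuition, the core of the PGD attack (and of the PGD-based adversarial training behind the pgdat configs) is iterated signed-gradient ascent followed by projection onto an L_inf ball. Below is a minimal sketch; the epsilon, step size, and step count are illustrative, and the repository's actual attack settings live in its config files.

import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    # Random start inside the epsilon-ball, clipped to the valid pixel range
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()           # ascend the loss
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)  # project to the L_inf ball
        x_adv = x_adv.clamp(0, 1)                              # keep pixels valid
    return x_adv.detach()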

Long-tailed recognition methods with adversarial training framework

We provide scripts for the long-tailed recognition methods applied with the adversarial training framework, as reported in our study. We mainly provide config files for CIFAR-10-LT. For CIFAR-100-LT, simply set imbalance_ratio=0.1, dataset=CIFAR100, and num_classes=100 in the config file, and remember to change model_dir (the working directory for saving log files and checkpoints) and model_path (the checkpoint evaluated by test.py), as sketched below.
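
The relevant fields would look roughly as follows. This is a hypothetical sketch: check the exact key names and nesting against the provided config files, and note that the paths are placeholders.

# Hypothetical sketch of the fields to edit for CIFAR-100-LT
dataset: CIFAR100
imbalance_ratio: 0.1
num_classes: 100
model_dir: <working-directory-for-logs-and-checkpoints>
model_path: <working-directory-for-logs-and-checkpoints>/epoch80.pt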

Methods applied at training time.

Methods applied at the training stage include class-aware re-sampling and several kinds of cost-sensitive learning; one instance is sketched below.
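
As one concrete instance of cost-sensitive learning, class-balanced (CB) re-weighting scales the loss by the inverse effective number of samples per class, following Cui et al.'s Class-Balanced Loss (referenced in the acknowledgements). The sketch below is for intuition only and is not the repository's exact implementation; the per-class counts are illustrative.

import torch

def class_balanced_weights(samples_per_class, beta=0.9999):
    # Effective number of samples per class: (1 - beta^n) / (1 - beta)
    n = torch.tensor(samples_per_class, dtype=torch.float)
    weights = (1.0 - beta) / (1.0 - torch.pow(beta, n))
    # Normalize so the weights sum to the number of classes
    return weights / weights.sum() * len(samples_per_class)

# Illustrative counts for CIFAR-10-LT with imbalance ratio 0.02
weights = class_balanced_weights([5000, 3237, 2096, 1357, 879, 569, 368, 238, 154, 100])
criterion = torch.nn.CrossEntropyLoss(weight=weights)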

Train the models with the corresponding config files:

# Vanilla Cos
python train.py configs/CIFAR10_LT/cifar10_LT0.02_pgdat_cos.yaml

# Class-aware margin
python train.py configs/CIFAR10_LT/cifar10_LT0.02_pgdat_outer_LDAM.yaml

# Cosine with margin
python train.py configs/CIFAR10_LT/cifar10_LT0.02_pgdat_cos_HE.yaml

# Class-aware temperature
python train.py configs/CIFAR10_LT/cifar10_LT0.02_pgdat_outer_CDT.yaml

# Class-aware bias
python train.py configs/CIFAR10_LT/cifar10_LT0.02_pgdat_outer_logitadjust.yaml

# Hard-example mining
python train.py configs/CIFAR10_LT/cifar10_LT0.02_pgdat_outer_focal.yaml

# Re-sampling
python train.py configs/CIFAR10_LT/cifar10_LT0.02_pgdat_rs-whole.yaml

# Re-weighting (based on effective number of samples)
python train.py configs/CIFAR10_LT/cifar10_LT0.02_pgdat_outer_CB.yaml

Evaluate the models with the same config files used for training:

python test.py <the-config-file-used-for-training>.yaml -a ALL

Methods applied via fine-tuning.

Fine-tuning-based methods re-train or fine-tune the classifier via data re-balancing techniques while keeping the backbone frozen.
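
In PyTorch, this amounts to freezing the backbone parameters and optimizing only the classifier. A minimal sketch, assuming a model that exposes backbone and classifier submodules; the names and toy architecture are illustrative, not the repository's actual layout.

import torch
import torch.nn as nn

class ToyModel(nn.Module):
    # Illustrative stand-in for a backbone + classifier model
    def __init__(self, num_classes=10):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.classifier = nn.Linear(16, num_classes)

    def forward(self, x):
        return self.classifier(self.backbone(x))

model = ToyModel()
for p in model.backbone.parameters():
    p.requires_grad = False  # freeze the feature extractor
# Fine-tune only the classifier, e.g. on re-balanced data
optimizer = torch.optim.SGD(model.classifier.parameters(), lr=0.01, momentum=0.9)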

Train a baseline model first, and then set load_model in the following config files to <folder-name-of-the-baseline-model>/epoch80.pt (the path to the last-epoch checkpoint; the directory settings are already aligned in this repo). Run fine-tuning by:

# One-epoch re-sampling
python train.py configs/CIFAR10_LT/cifar10_LT0.02_pgdat_rs-fine.yaml

# One-epoch re-weighting
python train.py configs/CIFAR10_LT/cifar10_LT0.02_pgdat_rw-fine.yaml 

# Learnable classifier scale
python train.py configs/CIFAR10_LT/cifar10_LT0.02_pgdat_lws.yaml 

Evaluate the models with the same config files used for training:

python test.py <the-config-file-used-for-training>.yaml -a ALL

Methods applied at inference time.

Methods applied at the inference stage start from a vanilla trained model and usually run a forward pass different from the one used at training, to compensate for the distribution shift between the training and test sets.
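
For example, the class-aware bias variant corresponds to post-hoc logit adjustment, which subtracts scaled log class priors from the logits at test time. A minimal sketch for intuition; tau and the per-class counts are illustrative values, not the repository's settings.

import torch

def adjust_logits(logits, samples_per_class, tau=1.0):
    # Subtract scaled log class priors so head classes lose the advantage
    # inherited from the skewed training distribution.
    priors = torch.tensor(samples_per_class, dtype=torch.float)
    priors = priors / priors.sum()
    return logits - tau * torch.log(priors)

logits = torch.randn(4, 10)  # e.g., a batch of 4 samples over 10 classes
preds = adjust_logits(logits, [5000, 3237, 2096, 1357, 879, 569, 368, 238, 154, 100]).argmax(dim=1)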

Similarly, train a baseline model first, and this time set model_path in the following config files to <folder-name-of-the-baseline-model>/epoch80.pt (the path to the last-epoch checkpoint; the directory settings are already aligned in this repo). Run evaluation with a given inference-time strategy by:

# Classifier re-scaling
python test.py configs/CIFAR10_LT/cifar10_LT0.02_pgdat_post_CDT.yaml -a ALL

# Classifier normalization
python test.py configs/CIFAR10_LT/cifar10_LT0.02_pgdat_post_norm.yaml -a ALL

# Class-aware bias
python test.py configs/CIFAR10_LT/cifar10_LT0.02_pgdat_post_logitadjust.yaml -a ALL

A baseline model is not always applicable, since some methods use a cosine classifier together with statistics recorded during training. For example, to apply the method below, train the model by:

# Feature disentangling
python train.py configs/CIFAR10_LT/cifar10_LT0.02_pgdat_TDESim.yaml 

Then set posthoc to True in the config file, and evaluate the model by:

python test.py configs/CIFAR10_LT/cifar10_LT0.02_pgdat_TDESim.yaml -a ALL

Attention: methods that involve loss temperatures or classifier scaling operations risk producing unexpectedly high robust accuracy under the PGD and MIM attacks, which is NOT reliable, as analyzed in Sec. 3.3 of our paper. This phenomenon can sometimes be observed at validation time during training. For a reliable evaluation, it is therefore essential to keep the logit scales at a similar level during both the training and inference stages.

Our method

The config files used for the training and inference stages may differ; they are denoted <config-prefix>_train.yaml and <config-prefix>_eval.yaml, respectively.

Training stage

Train the models by running:

# CIFAR-10-LT
python train.py configs/CIFAR10_LT/cifar10_LT0.02_pgdat_robal_N_train.yaml
python train.py configs/CIFAR10_LT/cifar10_LT0.02_pgdat_robal_R_train.yaml

# CIFAR-100-LT
python train.py configs/CIFAR100_LT/cifar100_LT0.1_pgdat_robal_N_train.yaml
python train.py configs/CIFAR100_LT/cifar100_LT0.1_pgdat_robal_R_train.yaml

Attention: the evaluation results produced at the end of the training stage with the original training config file omit the re-balancing strategy applied at the inference stage, so we must switch to the evaluation config file to complete the process.

Inference stage

Evaluate the models by running:

# CIFAR-10-LT
python test.py configs/CIFAR10_LT/cifar10_LT0.02_pgdat_robal_N_eval.yaml -a ALL
python test.py configs/CIFAR10_LT/cifar10_LT0.02_pgdat_robal_R_eval.yaml -a ALL

# CIFAR-100-LT
python test.py configs/CIFAR100_LT/cifar100_LT0.1_pgdat_robal_N_eval.yaml -a ALL
python test.py configs/CIFAR100_LT/cifar100_LT0.1_pgdat_robal_R_eval.yaml -a ALL

Pre-trained models

We provide pre-trained models for our methods above. Download and extract them to the ./checkpoints directory, then produce the results with the eval.yaml in the corresponding folder by running:

python test.py checkpoints/<folder-name-of-the-pretrained-model>/eval.yaml -a ALL

License and Citation

If you find our code or paper useful, please cite our paper:

@inproceedings{wu2021advlt,
  author = {Wu, Tong and Liu, Ziwei and Huang, Qingqiu and Wang, Yu and Lin, Dahua},
  title = {Adversarial Robustness under Long-Tailed Distribution},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year = {2021}
}

Acknowledgement

We thank the authors of the following repositories for code reference: TRADES, AutoAttack, ADT, Class-Balanced Loss, LDAM-DRW, OLTR, AT-HE, Classifier-Balancing, mma_training, TDE, etc.

Contact

Please contact @wutong16 with questions, comments, or bug reports.
