# Get Fooled for the Right Reason

Official repository for the NeurIPS 2021 paper "Get Fooled for the Right Reason: Improving Adversarial Robustness through a Teacher-guided Curriculum Learning Approach".
## Dependencies
- Tensorflow 1.14.0
- Python 3.7
## Datasets
CIFAR10: https://www.cs.toronto.edu/~kriz/cifar.html
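Dataset loading is handled by the repository's `cifar10_input.py` (see Code overview below). Purely to illustrate the on-disk format, here is a minimal sketch of reading one batch from the CIFAR10 "python version" download; the helper name `load_cifar10_batch` is illustrative and not part of this codebase:

```python
import pickle
import numpy as np

def load_cifar10_batch(path):
    """Read one CIFAR-10 'python version' batch file (e.g. data_batch_1).

    Illustrative helper only; the repository's cifar10_input.py is the
    actual loader.
    """
    with open(path, 'rb') as f:
        batch = pickle.load(f, encoding='bytes')
    # Each row holds 3072 bytes: 1024 red, 1024 green, then 1024 blue values
    # of a 32x32 image; reshape to NHWC order for TensorFlow.
    images = batch[b'data'].reshape(-1, 3, 32, 32).transpose(0, 2, 3, 1)
    labels = np.array(batch[b'labels'])
    return images, labels
```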
## Models
modelGTP_cifar10: https://www.dropbox.com/sh/29n2lt08ypjdw67/AABSZlD8nTM08E-bcZv1mdkOa?dl=0
## Usage

- Install dependencies with `pip install -r requirements.txt`. Preferably, create an anaconda environment.
- Download and save the datasets in the `datasets/` folder.
- Download and save the model in the `models/` folder.
- Run `python eval_attack.py` (a sketch of the kind of evaluation loop it performs is shown below).
- The evaluation results will be stored in the `attack_log` directory.
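For orientation, below is a minimal sketch of the kind of TensorFlow 1.x evaluation loop `eval_attack.py` performs. `Model`, `LinfPGDAttack` (itself sketched under Code overview below), `model.accuracy`, and the checkpoint path are illustrative assumptions, not this repository's exact API:

```python
import tensorflow as tf

# Hypothetical stand-ins: `Model` and `LinfPGDAttack` play the roles of the
# classes defined in model_new.py and PGD_attack.py; real names may differ.
model = Model(mode='eval')
attack = LinfPGDAttack(model, epsilon=8.0, step_size=2.0, num_steps=20)
saver = tf.train.Saver()

with tf.Session() as sess:
    # Assumed checkpoint location under models/; adjust as needed.
    saver.restore(sess, tf.train.latest_checkpoint('models/modelGTP_cifar10'))
    # x_test, y_test: a batch of CIFAR10 test images and labels
    # (loaded, e.g., as in the Datasets sketch above).
    x_adv = attack.perturb(x_test, y_test, sess)
    acc = sess.run(model.accuracy,
                   feed_dict={model.x_input: x_adv, model.y_input: y_test})
    print('adversarial accuracy: %.2f%%' % (100.0 * acc))
```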
## Note
Using a GPU is highly recommended.
## Code overview

- `model_new.py`: contains code for the IGAM model architectures.
- `cifar10_input.py`: provides utility functions and classes for loading the CIFAR10 dataset.
- `PGD_attack.py`: generates adversarial examples and saves them in `attacks/` (a minimal sketch of the PGD loop is given below).
- `run_attack.py`: evaluates the model on adversarial examples from `attacks/`.
- `config_attack.py`: parameters for adversarial example evaluation.
- `eval_attack.py`: runs the FGSM, PGD-5, PGD-10, and PGD-20 attacks and logs the results in the `attack_log` directory. Results for other attacks can be obtained by modifying the `num_steps` flag in the code.
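The attack generation and evaluation follow the MadryLab `cifar10_challenge` code base (acknowledged below), so the core of `PGD_attack.py` can be expected to resemble the following minimal L-infinity PGD sketch. The class name and the `x_input`/`y_input`/`xent` tensor names mirror that reference implementation and are assumptions, not necessarily this repository's exact API:

```python
import numpy as np
import tensorflow as tf

class LinfPGDAttack:
    """Minimal L-infinity PGD sketch in the style of MadryLab's cifar10_challenge.

    Assumes `model` exposes an image placeholder `x_input`, a label
    placeholder `y_input`, and a cross-entropy loss tensor `xent`;
    epsilon and step_size are in [0, 255] pixel units.
    """

    def __init__(self, model, epsilon=8.0, step_size=2.0, num_steps=10):
        self.model = model
        self.epsilon = epsilon
        self.step_size = step_size
        self.num_steps = num_steps
        # Gradient of the loss with respect to the input image.
        self.grad = tf.gradients(model.xent, model.x_input)[0]

    def perturb(self, x_nat, y, sess):
        # Random start inside the epsilon ball; PGD-k corresponds to
        # num_steps=k, while FGSM is roughly a single full-epsilon step
        # without the random start.
        x = x_nat + np.random.uniform(-self.epsilon, self.epsilon,
                                      x_nat.shape)
        x = np.clip(x, 0.0, 255.0)
        for _ in range(self.num_steps):
            g = sess.run(self.grad, feed_dict={self.model.x_input: x,
                                               self.model.y_input: y})
            # Signed gradient ascent step on the loss...
            x = x + self.step_size * np.sign(g)
            # ...then project back into the epsilon ball and pixel range.
            x = np.clip(x, x_nat - self.epsilon, x_nat + self.epsilon)
            x = np.clip(x, 0.0, 255.0)
        return x
```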
## Acknowledgements
Code bases we found useful in our work:
- https://github.com/MadryLab/cifar10_challenge (for adversarial example generation and evaluation)
- https://github.com/ashafahi/free_adv_train (for model code)