Min-Max Adversarial Attacks

[Paper] [arXiv] [Poster] [Slide] [Project Page]

Adversarial Attack Generation Empowered by Min-Max Optimization
Jingkang Wang, Tianyun Zhang, Sijia Liu, Pin-Yu Chen, Jiacen Xu, Makan Fardad, Bo Li
NeurIPS 2021


Revisiting the strength of min-max optimization in the context of adversarial attack generation.

Reproduce Main Results

Please check the neurips21 folder to reproduce the robust adversarial attack results presented in the paper. We provide detailed instructions in neurips21/README.md and bash scripts in neurips21/scripts. The code is based on TensorFlow 1.x (tested with versions 1.10.0 to 1.15.0), which is somewhat outdated; we currently have no plans to upgrade it to TensorFlow 2.x. If you do not need to reproduce the exact numbers but want to use min-max attacks in your own projects, we provide a PyTorch implementation with the latest pre-trained models (e.g., EfficientNet, ViT) and ImageNet-1k support. Please see the following section for more details.
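For intuition, the robust ensemble attack in the paper is a min-max problem: find a perturbation that maximizes the worst-case (over models) attack loss, which can be solved by alternating a gradient step on the perturbation with a projected step on model weights over the probability simplex. The snippet below is a minimal NumPy sketch of that alternating scheme on made-up linear toy losses; the function names (`minmax_attack`, `project_simplex`) and the toy setup are illustrative assumptions, not the repository's actual API.

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection of v onto the probability simplex {w >= 0, sum w = 1}."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u * np.arange(1, len(v) + 1) > (css - 1))[0][-1]
    theta = (css[rho] - 1.0) / (rho + 1.0)
    return np.maximum(v - theta, 0.0)

def minmax_attack(A, eps=1.0, lr_delta=0.1, lr_w=0.1, steps=200):
    """Toy min-max attack: max_delta min_{w in simplex} sum_i w_i * (a_i . delta).

    A: (K, d) matrix whose rows a_i define K linear per-model attack losses.
    Returns the perturbation delta (L-inf ball of radius eps) and weights w.
    """
    K, d = A.shape
    delta = np.zeros(d)
    w = np.ones(K) / K
    for _ in range(steps):
        # Inner ascent on delta: maximize the w-weighted loss, then clip to the ball.
        delta = np.clip(delta + lr_delta * (w @ A), -eps, eps)
        # Outer descent on w: the gradient w.r.t. w is the vector of per-model losses,
        # so w shifts mass toward the currently hardest (lowest-loss) model.
        w = project_simplex(w - lr_w * (A @ delta))
    return delta, w

# Two conflicting toy "models": losses delta[0] + delta[1] and delta[0] - delta[1].
# The worst-case-optimal perturbation is delta ~ [1, 0], where both losses equal 1.
A = np.array([[1.0, 1.0], [1.0, -1.0]])
delta, w = minmax_attack(A)
losses = A @ delta
```

The per-model weights w make the attack adaptive: instead of averaging losses with fixed weights, the outer minimization automatically emphasizes whichever model the current perturbation fools least.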

PyTorch Implementation

TBD (stay tuned!)

Citation

If you find our code or paper useful, please consider citing:

@inproceedings{wang2021adversarial,
    title={Adversarial Attack Generation Empowered by Min-Max Optimization},
    author={Wang, Jingkang and Zhang, Tianyun and Liu, Sijia and Chen, Pin-Yu and Xu, Jiacen and Fardad, Makan and Li, Bo},
    booktitle={Advances in Neural Information Processing Systems (NeurIPS)},
    year={2021}
}

Questions/Bugs

Please submit a GitHub issue or contact wangjk@cs.toronto.edu if you have any questions or find any bugs.
