Official repository for Jia, Raghunathan, Göksel, and Liang, "Certified Robustness to Adversarial Word Substitutions" (EMNLP 2019)

Overview

Certified Robustness to Adversarial Word Substitutions

This is the official GitHub repository for the following paper:

Certified Robustness to Adversarial Word Substitutions.
Robin Jia, Aditi Raghunathan, Kerem Göksel, and Percy Liang.
Empirical Methods in Natural Language Processing (EMNLP), 2019.

For full details on reproducing the results, see this Codalab worksheet, which contains all code, data, and experiments from the paper. This GitHub repository serves as an easy way to get started with the code, and has some additional instructions and documentation.

Setup

This code has been tested with Python 3.6, PyTorch 1.3.1, NumPy 1.15.4, and NLTK 3.4.

Download data dependencies by running the provided script:

./download_deps.sh

If you already have GloVe vectors on your system, it may be more convenient to comment out the part of download_deps.sh that downloads GloVe and instead symlink the directory containing the GloVe vectors to data/glove.

Interval Bound Propagation library

We have implemented many primitives for Interval Bound Propagation (IBP), which can be found in src/ibp.py. This code should be reusable and intuitive for anyone familiar with PyTorch. When designing this library, our goal was to make it possible to write code that looks like standard PyTorch code but can be trained with IBP. Below, we give an overview of the code.

BoundedTensor

BoundedTensor is our version of torch.Tensor. It represents a tensor that additionally has some bounded set of possible values. The two most important subclasses of BoundedTensor are IntervalBoundedTensor and DiscreteChoiceTensor.

IntervalBoundedTensor

An IntervalBoundedTensor keeps track of three instance variables: an actual value, a coordinate-wise upper bound on the value, and a coordinate-wise lower bound on the value. All three of these are torch.Tensor objects. It also implements many standard methods of torch.Tensor.
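For illustration, here is a minimal construction sketch. The (value, lower bound, upper bound) argument order and the val/lb/ub attribute names are assumptions; check src/ibp.py for the exact signature.

import torch
import src.ibp as ibp  # adjust the import path to your setup

val = torch.tensor([0.5, -1.0])   # actual value
lb = torch.tensor([0.0, -2.0])    # coordinate-wise lower bound
ub = torch.tensor([1.0, 0.0])     # coordinate-wise upper bound
x = ibp.IntervalBoundedTensor(val, lb, ub)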

DiscreteChoiceTensor

A DiscreteChoiceTensor represents a tensor that can take a discrete set of values. We use DiscreteChoiceTensor to represent the set of possible word vectors that can appear at each slice of the input. Importantly, DiscreteChoiceTensor.to_interval_bounded() converts a DiscreteChoiceTensor to an IntervalBoundedTensor by taking a coordinate-wise min/max.
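For intuition, the conversion amounts to taking extremes over the choice axis. Here is a plain-torch sketch of the idea, not the library's internal code:

import torch

# choices: possible word vectors per position, shape (batch, seq_len, num_choices, dim)
choices = torch.randn(2, 5, 8, 100)
lb = choices.min(dim=2).values  # coordinate-wise lower bound over choices
ub = choices.max(dim=2).values  # coordinate-wise upper bound over choices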

NormBallTensor

We also provide NormBallTensor, which represents a p-norm ball of a given radius around a value.
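For intuition, with p = infinity the tightest interval relaxation of such a ball is simply value ± radius. A plain-torch sketch of that bound computation, not the library API:

import torch

val = torch.randn(3, 4)
radius = 0.1
lb, ub = val - radius, val + radius  # interval bounds for an L-infinity ball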

Functions and layers

To go with BoundedTensor, we include functions and layers that know how to take BoundedTensor objects as inputs and return BoundedTensor objects as outputs. Most of these should be straightforward to use for folks familiar with their standard torch, torch.nn, and torch.nn.functional equivalents (with a caveat that not all flags in the standard library are necessarily supported).

Functions

Available implementations of basic torch functions include:

  • add
  • mul
  • div
  • bmm
  • cat
  • stack
  • sum

In many cases, we directly call the torch counterpart if the inputs are torch.Tensor objects. A few additional cases are described below.
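A simplified sketch of that dispatch pattern (ignoring mixed plain/bounded inputs, and assuming the val/lb/ub attribute names described above; not the library's exact code):

import torch
import src.ibp as ibp  # adjust the import path to your setup

def add(a, b):
    if isinstance(a, torch.Tensor) and isinstance(b, torch.Tensor):
        return a + b  # no bounds involved: defer directly to torch
    # IBP rule for addition: values and bounds add coordinate-wise.
    return ibp.IntervalBoundedTensor(a.val + b.val, a.lb + b.lb, a.ub + b.ub)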

Activation functions

Since elementwise monotonic functions all share the same IBP formula, we export a single function, ibp.activation, which can apply an elementwise ReLU, sigmoid, tanh, or exp to an IntervalBoundedTensor.
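The shared formula is simple: a monotonically increasing f maps an interval [lb, ub] to [f(lb), f(ub)]. A plain-torch sketch of the rule, not the library's exact signature:

import torch

def monotone_ibp(f, val, lb, ub):
    # For elementwise monotonically increasing f, each bound maps through f directly.
    return f(val), f(lb), f(ub)

out_val, out_lb, out_ub = monotone_ibp(torch.tanh, torch.zeros(3), -torch.ones(3), torch.ones(3))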

Logsoftmax

We include a log_softmax() function that is equivalent to torch.nn.functional.log_softmax(). We strongly advise users to use this implementation rather than implementing their own softmax operation, as numerical instability can easily arise with a naive implementation.
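Assuming it mirrors the torch.nn.functional signature, usage would look like the following; the dim argument is an assumption, so check src/ibp.py.

import torch
import src.ibp as ibp  # adjust the import path to your setup

logits = torch.randn(4, 3)  # may also be an IntervalBoundedTensor
log_probs = ibp.log_softmax(logits, dim=-1)  # dim argument assumed; see src/ibp.py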

Nonnegative matrix multiplication

We include a matmul_nneg() function that handles matrix multiplication between two non-negative matrices, as this case is simpler than general matrix multiplication.
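Why the non-negative case is simpler: every product term is non-negative and increases with each entry, so lower bounds multiply with lower bounds and upper bounds with upper bounds, with no sign case analysis. A plain-torch sketch of the bound computation, not the library code:

import torch

def matmul_nneg_bounds(a_lb, a_ub, b_lb, b_ub):
    # Valid only when all entries of both operands are non-negative.
    return a_lb @ b_lb, a_ub @ b_ub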

Layers (nn.Module objects)

Many basic layers are implemented by extending their torch.nn counterparts, including

  • Linear
  • Embedding
  • Conv1d
  • MaxPool1d
  • LSTM
  • Dropout

RNNs

Our library also includes LSTM and GRU classes, which extend nn.Module directly. These are unfortunately slower than their torch.nn counterparts, because the torch.nn RNNs use cuDNN.

Examples

If you want to see this library in action, a good place to start is BOWModel in src/text_classification.py. This implements a simple bag-of-words model for text classification. Note that in forward(), we accept a flag called compute_bounds which lets the user decide whether to run IBP or not.
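A condensed sketch of that pattern, with a hypothetical model and assumed attribute names (the val attribute and to_interval_bounded() described above); see BOWModel for the real implementation.

import torch.nn as nn
import src.ibp as ibp  # adjust the import path to your setup

class TinyClassifier(nn.Module):
    def __init__(self, dim, num_classes=2):
        super().__init__()
        self.fc = ibp.Linear(dim, num_classes)

    def forward(self, x, compute_bounds=True):
        # x: a DiscreteChoiceTensor of word vectors (see above)
        if compute_bounds:
            x = x.to_interval_bounded()  # propagate intervals through the layers
        else:
            x = x.val  # plain tensor: layers behave like their torch.nn counterparts
        pooled = ibp.sum(x, dim=1)  # sequence pooling (signature assumed)
        return self.fc(pooled)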

Paper experiments

In this repository, we include a minimal set of commands and instructions to reproduce a few key results from our EMNLP 2019 paper. We will focus on the CNN model results on the IMDB dataset. To see other available command line flags, you can run python src/train.py -h.

If you are interested in reproducing our experiments, we recommend looking at the aforementioned Codalab worksheet, which shows how to reproduce all results in our paper. Note that the commands on Codalab include some extra flags (--neighbor-file, --glove-dir, --imdb-dir, and --snli-dir) that are used to specify non-default paths to files. These flags are unnecessary when following the instructions in this repository.

Training

Here are commands to train the CNN model on IMDB with standard training, certifiably robust training, and data augmentation.

Standard training

To train the baseline model without IBP, run the following:

python src/train.py classification cnn outdir_cnn_normal -d 100 --pool mean -T 10 --dropout-prob 0.2 -b 32 --save-best-only

This should get about 88% accuracy on dev (but 0% certified accuracy). outdir_cnn_normal is an output directory where model parameters and stats will be saved.

Certifiably robust training

To use certifiably robust training with IBP, run the following:

python src/train.py classification cnn outdir_cnn_cert -d 100 --pool mean -T 60 --full-train-epochs 20 -c 0.8 --dropout-prob 0.2 -b 32 --save-best-only

This should get about 81% accuracy and 66% certified accuracy on dev. Note that these numbers do not incorporate the language model constraints on the attack surface, so the certified accuracy shown during training is an underestimate. The constraints are enforced in the testing commands below.

Training with data augmentation

To train with data augmentation, run the following:

python src/train.py classification cnn outdir_cnn_aug -d 100 --pool mean -T 60 --augment-by 4 --dropout-prob 0.2 -b 32 --save-best-only

This should get about 85% accuracy and 84% augmented accuracy on dev (but 0% certified accuracy).

Testing

Next, we will show how to test the trained models using the genetic attack. The genetic attack heuristically searches for a perturbation that causes an error. In this phase, we also incorporate pre-computed language model scores that determine which perturbations are valid.

For example, let's say we want to use the trained model inside the outdir_cnn_cert directory. First, we choose a checkpoint based on the best certified accuracy on the dev set, say checkpoint 57. (Note: the training code with --save-best-only will save only the best model and the final model; stats on all checkpoints are logged in <outdir>/all_epoch_stats.json.)
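For example, a small script along these lines can scan the stats file; the structure and key names below are hypothetical, so inspect your all_epoch_stats.json before relying on them.

import json

with open('outdir_cnn_cert/all_epoch_stats.json') as f:
    epoch_stats = json.load(f)

# Hypothetical structure: a list of per-epoch dicts with a dev certified-accuracy key.
best = max(range(len(epoch_stats)), key=lambda i: epoch_stats[i]['dev_cert_acc'])
print('best checkpoint:', best)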

This command will run the genetic attack:

python src/train.py classification cnn eval_cnn_cert -L outdir_cnn_cert --load-ckpt 57 -d 100 --pool mean -T 0 -b 1 -a genetic --adv-num-epochs 40 --adv-pop-size 60 --use-lm --downsample-to 1000

It should get about 80% standard accuracy, 72.5% certified accuracy, and 73% adversarial accuracy (i.e., accuracy against the genetic attack). For all models, you should find that adversarial accuracy is between standard accuracy and certified accuracy. For IMDB, we downsample to 1000 examples, as the genetic attack is pretty slow; the provided precomputed LM scores (in lm_scores) are only for the first 1000 examples in the train, development, and test sets. For SNLI, we use the entire development and test sets for evaluation.

Note: This code is sensitive to the version of NLTK you use. The LM prediction files provided here should work if you are using a current version of NLTK and have updated your nltk_data directory recently. The experiments on Codalab use an older NLTK version; you can download the LM files from Codalab if you need compatibility with older NLTK versions. An NLTK version mismatch will manifest as a KeyError with an "Unrecognized sentence" message.

Running the language model yourself

If you want to precompute language model scores on other data, use the following instructions.

  1. Clone the following git repository:

git clone https://github.com/robinjia/l2w windweller-l2w

  2. Obtain pre-trained parameters and put them in a directory named l2w-params within that repository. Please contact us if you need a copy of the parameters.

  3. Adapt src/precompute_lm_scores.py for your dataset.
