Analysis of rationale selection in neural rationale models

Overview

Neural Rationale Interpretability Analysis

We analyze the neural rationale models proposed by Lei et al. (2016) and Bastings et al. (2019), as implemented in the repository accompanying Interpretable Neural Predictions with Differentiable Binary Variables (Bastings et al., 2019). We copied their original repository and built upon it with a data perturbation analysis. Specifically, we implement a procedure to perturb sentences of the Stanford Sentiment Treebank (SST) data set and analyze the behavior of the models on the original and perturbed test sets.

Instructions

Installation

You need to have Python 3.6 or higher installed. First clone this repository.

Install all required Python packages using:

pip install -r requirements.txt

And finally download the data:

cd interpretable_predictions
./download_data_sst.sh

This will download the SST data (including filtered word embeddings).

The perturbed data and the model behavior on it are saved in data/sst/data_info.pickle, results/sst/latent_30pct/data_results.pickle, and results/sst/bernoulli_sparsity01505/data_results.pickle. To analyze these directly, skip to the Plotting and Analysis section. To reproduce these results, continue with the steps below.
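
All three files are ordinary Python pickles and can be inspected directly; a minimal sketch, assuming the paths above (only loading is shown here, since the internal layout of data_results.pickle is not documented in this README):

import pickle

# Perturbed sentences (original instances and their one-token-substituted copies).
with open("data/sst/data_info.pickle", "rb") as f:
    data_info = pickle.load(f)

# Rationale selections and predictions of the latent (CR) model.
with open("results/sst/latent_30pct/data_results.pickle", "rb") as f:
    latent_results = pickle.load(f)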

Training on Stanford Sentiment Treebank (SST)

To train the latent (CR) rationale model to select 30% of text:

python -m latent_rationale.sst.train \
  --model latent --selection 0.3 --save_path results/sst/latent_30pct

To train the Bernoulli REINFORCE (PG) model with L0 penalty weight 0.01505:

python -m latent_rationale.sst.train \
  --model rl --sparsity 0.01505 --save_path results/sst/bernoulli_sparsity01505

Data Perturbation

To perform the data perturbation, run:

python -m latent_rationale.sst.perturb

This will save the data in data/sst/data_info.pickle.
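
Conceptually, the perturbation produces, for each test sentence, a set of copies that each differ from the original in exactly one token (see Perturbed Data Format below). A minimal sketch of that idea, with a hypothetical get_replacements helper standing in for however the repository actually chooses candidate tokens:

def perturb_sentence(tokens, get_replacements):
    """Return copies of `tokens`, each with exactly one token substituted.

    `get_replacements(token, position)` is a hypothetical helper that yields
    candidate replacement tokens; the repository's own selection strategy
    may differ from this illustration.
    """
    perturbed = []
    for i, token in enumerate(tokens):
        for replacement in get_replacements(token, i):
            copy = list(tokens)
            copy[i] = replacement
            perturbed.append(copy)
    return perturbed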

Prediction and Rationale Selection

To get the latent (CR) model's rationale selections and predictions on the original and perturbed data, run:

python -m latent_rationale.sst.predict_perturbed --ckpt results/sst/latent_30pct/

For the Bernoulli model, run:

python -m latent_rationale.sst.predict_perturbed --ckpt results/sst/bernoulli_sparsity01505/

These will save the rationale and prediction information in results/sst/latent_30pct/data_results.pickle and results/sst/bernoulli_sparsity01505/data_results.pickle for the two models, respectively.
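
These results allow comparing the rationale selected for an original sentence with the rationales selected for its perturbed copies. One common way to quantify how much a rationale changes is the token-level overlap (Jaccard similarity) of the two selections; the helper below is a generic illustration of that idea, not code from this repository:

def rationale_jaccard(selected_a, selected_b):
    """Jaccard similarity between two rationales, given as iterables of selected token positions."""
    a, b = set(selected_a), set(selected_b)
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)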

Plotting and Analysis

To reconstruct the plots for the CR model, run:

python -m latent_rationale.sst.plots --ckpt results/sst/latent_30pct/

To run part-of-speech (POS) analysis for the CR model, run:

python -m latent_rationale.sst.pos_analysis --ckpt results/sst/latent_30pct/
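
The POS analysis looks at which parts of speech the models tend to select as rationales. As a generic illustration of that kind of tally (the repository's own tagging setup may differ), one could count POS tags over the selected tokens with NLTK:

import nltk
from collections import Counter

# Requires the NLTK POS tagger model: nltk.download("averaged_perceptron_tagger")
def pos_tag_counts(rationale_tokens):
    """Count part-of-speech tags among the tokens selected as a rationale."""
    tagged = nltk.pos_tag(rationale_tokens)
    return Counter(tag for _, tag in tagged)

print(pos_tag_counts(["amazing", "film"]))  # e.g. Counter({'JJ': 1, 'NN': 1})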

Perturbed Data Format

The perturbed data is stored as a dictionary whose keys are indices ranging from 0 to 2209 (the standard SST test split has 2,210 sentences). Each value is a dictionary with an original field, containing the original SST test instance, and a perturbed field, containing a list of perturbed instances; each perturbed instance is a copy of the original with one token substituted by a replacement. This dictionary is saved in data/sst/data_info.pickle.
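
Given that layout, the perturbed data can be traversed as follows (only the documented original and perturbed keys are used):

import pickle

with open("data/sst/data_info.pickle", "rb") as f:
    data_info = pickle.load(f)

n_perturbed = sum(len(entry["perturbed"]) for entry in data_info.values())
print(f"{len(data_info)} original test sentences, {n_perturbed} perturbed copies in total")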

Owner
Yiming Zheng