Code and experiments for the ACL-IJCNLP 2021 paper "Mind Your Outliers! Investigating the Negative Impact of Outliers on Active Learning for Visual Question Answering".

Overview

Mind Your Outliers!

Mind Your Outliers! Investigating the Negative Impact of Outliers on Active Learning for Visual Question Answering
Siddharth Karamcheti, Ranjay Krishna, Li Fei-Fei, Christopher D. Manning
Annual Meeting of the Association for Computational Linguistics (ACL-IJCNLP) 2021.

Code & experiments for training various models and performing active learning on a variety of VQA datasets and splits, plus additional code for creating and visualizing dataset maps for qualitative analysis!

If there are any trained models you want access to that aren't easy for you to train, please let me know and I will do my best to get them to you. Unfortunately, finding a hosting solution for 1.8TB of checkpoints hasn't been easy 😅.


Quickstart

Clones vqa-outliers to the current working directory, then walks through dependency setup, mostly leveraging the environments/environment-{cpu, gpu}.yaml files. Assumes conda is installed locally (and is on your path!). If it isn't, follow the official conda installation directions (Anaconda or Miniconda) first.

We provide two sets of installation instructions -- one for CUDA-equipped Linux machines with GPUs (for training), and another for CPU-only machines (e.g., macOS, Linux) geared towards local development and cases where GPUs are not available.

The existing GPU YAML File is geared for CUDA 11.0 -- if you have older GPUs, file an issue, and I'll create an appropriate conda configuration!

Setup Instructions

# Clone `vqa-outliers` Repository and run Conda Setup
git clone https://github.com/siddk/vqa-outliers.git
cd vqa-outliers

# Ensure you're using the appropriate hardware config!
conda env create -f environments/environment-{cpu, gpu}.yaml
conda activate vqa-outliers
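
Once the environment is activated, a quick sanity check (a hypothetical snippet, not part of the repository) confirms that PyTorch, PyTorch Lightning, and CUDA are visible:

# Quick environment sanity check (hypothetical snippet, not part of the repository)
import torch
import pytorch_lightning as pl

print(f"PyTorch version:           {torch.__version__}")
print(f"PyTorch Lightning version: {pl.__version__}")
print(f"CUDA available:            {torch.cuda.is_available()}")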

Usage

The following section walks through downloading all the necessary data (be warned -- it's a lot!), running the various active learning strategies on the given VQA datasets, generating Dataset Maps over the full datasets, and visualizing active learning acquisitions relative to those maps.

Note: This is going to require several hundred GB of disk space -- for targeted experiments, feel free to file an issue and I can point you to what you need!

Downloading Data

We depend on a few datasets, pretrained word vectors (GloVe), and a pretrained multimodal model (LXMERT -- though not the checkpoint commonly released in HuggingFace Transformers). To download all dependencies, use the following commands from the root of this repository (in general, run everything from the repository root!).

# Note: All the following will create/write to the directory data/ in the current repository -- feel free to change!

# GloVe Vectors
./scripts/download/glove.sh

# Download LXMERT Checkpoint (no-QA Pretraining)
./scripts/download/lxmert.sh

# Download VQA-2 Dataset (Entire Thing -- Questions, Raw Images, BottomUp Object Features)!
./scripts/download/vqa2.sh

# Download GQA Dataset (Entire Thing -- Questions, Raw Images, BottomUp Object Features)!
./scripts/download/gqa.sh

Additional Preprocessing

Many of the models we evaluate in this work use the object-based BottomUp-TopDown Attention Features -- however, our Grid Logistic Regression and LSTM-CNN Baseline both use dense ResNet-101 Features of the images. We extract these from the raw images ourselves as follows (again, this will take a ton of disk space):

# Note: GPU Recommended for Faster Extraction

# Extract VQA-2 Grid Features
python scripts/extract.py --dataset vqa2 --images data/VQA-Images --spatial data/VQA-Spatials

# Extract GQA Grid Features
python scripts/extract.py --dataset gqa --images data/GQA-Images --spatial data/GQA-Spatials
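
For reference, the core of dense grid-feature extraction looks roughly like the following -- a minimal sketch of the general technique (ResNet-101 with the final pooling and classification layers removed), not the repository's scripts/extract.py; the image path and input resolution are illustrative:

# Sketch of dense grid-feature extraction (illustrative, not scripts/extract.py)
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image

# ResNet-101 trunk with the final average-pool and classifier removed, so the
# output is a dense grid of 2048-dimensional features instead of a single vector.
resnet = models.resnet101(pretrained=True).eval()
trunk = torch.nn.Sequential(*list(resnet.children())[:-2])

transform = T.Compose([
    T.Resize((448, 448)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

image = Image.open("data/VQA-Images/example.jpg").convert("RGB")  # illustrative path
with torch.no_grad():
    grid = trunk(transform(image).unsqueeze(0))  # [1, 2048, 14, 14] for a 448x448 input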

Running Active Learning

Running active learning is a simple matter of using the script active.py in the root of this directory. This script can reproduce every experiment from the paper, and allows you to specify the following:

  • Dataset in < vqa2 | gqa >
  • Split in < all | sports | food > (for VQA-2) and all for GQA
  • Model (mode) in < glreg | olreg | cnn | butd | lxmert > (grid and object logistic regression, the LSTM-CNN baseline, BottomUp-TopDown, and LXMERT, respectively)
  • Active Learning Strategy in < baseline | least-conf | entropy | mc-entropy | mc-bald | coreset-{fused, language, vision} > following the paper.
  • Size of Seed Set (burn, for burn-in) in < p05 | p10 | p25 | p50 >, where each denotes the percentage of the full dataset to use as the seed set.

For example, to run the BottomUp-TopDown Attention Model (butd) with the VQA-2 Sports Dataset, with Bayesian Active Learning by Disagreement, with a seed set that's 10% the size of the original dataset, use the following:

# Note: If GPU available (recommended), pass --gpus 1 as well!
python active.py --dataset vqa2 --split sports --mode butd --burn p10 --strategy mc-bald
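
For intuition, the uncertainty-based strategies score unlabeled examples using the model's predictive distribution. The following is a minimal sketch of the standard formulations (least-confidence, entropy, and BALD via Monte-Carlo dropout), not the repository's implementation:

# Sketch of standard uncertainty scores (illustrative, not the repository's implementation)
import torch

def least_confidence(probs):
    # probs: [N, C] softmax outputs over answers -- higher score = more uncertain
    return 1.0 - probs.max(dim=-1).values

def entropy(probs, eps=1e-12):
    # Shannon entropy of each example's predictive distribution
    return -(probs * (probs + eps).log()).sum(dim=-1)

def bald(mc_probs, eps=1e-12):
    # mc_probs: [T, N, C] softmax outputs from T stochastic (MC-dropout) forward passes;
    # BALD = H[mean prediction] - mean per-pass entropy (i.e., the mutual information).
    mean_probs = mc_probs.mean(dim=0)
    predictive_entropy = -(mean_probs * (mean_probs + eps).log()).sum(dim=-1)
    expected_entropy = -(mc_probs * (mc_probs + eps).log()).sum(dim=-1).mean(dim=0)
    return predictive_entropy - expected_entropy

# Toy usage: acquire the 500 most uncertain of 1000 unlabeled examples
probs = torch.softmax(torch.randn(1000, 3129), dim=-1)  # 3129 ~ a typical VQA-2 answer vocabulary
acquired = least_confidence(probs).topk(k=500).indices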

File an issue if you run into trouble!

Creating Dataset Maps

Creating a Dataset Map entails training a model on an entire dataset while maintaining per-example statistics over the course of training. To train models and dump these statistics, use the top-level file cartograph.py as follows (again, for the BottomUp-TopDown Model on VQA2-Sports):

python cartograph.py --dataset vqa2 --split sports --mode butd
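
Concretely, dataset maps (following the Dataset Cartography formulation this work builds on) place each training example by the mean confidence the model assigns to the gold answer across epochs and the variability of that confidence. Below is a minimal sketch of those per-example statistics, assuming gold-answer probabilities have been logged once per epoch; cartograph.py handles the actual logging:

# Sketch of per-example dataset-map statistics (illustrative; cartograph.py does the real logging)
import numpy as np

# gold_probs[e, i] = probability the model assigns to example i's gold answer after epoch e
gold_probs = np.random.rand(10, 5000)  # illustrative stand-in for logged statistics

confidence  = gold_probs.mean(axis=0)           # y-axis: mean gold-answer probability across epochs
variability = gold_probs.std(axis=0)            # x-axis: std-dev of that probability across epochs
correctness = (gold_probs > 0.5).mean(axis=0)   # simplified proxy for fraction of epochs answered correctly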

Once you've trained a model and generated the necessary statistics, you can plot the corresponding map using the top-level file chart.py as follows:

# Note: `map` mode only generates the dataset map... to generate acquisition plots, see below!
python chart.py --mode map --dataset vqa2 --split sports --model butd

Note that Dataset Maps are generated per-dataset, per-model!

Visualizing Acquisitions

To visualize the acquisitions of a given active learning strategy relative to a given dataset map (the bar graphs from our paper), you can run the following (again, with our running example, but works for any combination):

python chart.py --mode acquisitions --dataset vqa2 --split sports --model butd --burn p10 --strategies mc-bald

Note that the script chart.py defaults to plotting acquisitions for all active learning strategies -- either make sure to run these out for the configuration you want, or provide the appropriate arguments!

Ablating Outliers

Finally, to run the Outlier Ablation experiments for a given model/active learning strategy, take the following steps:

  • Identify the different "frontiers" of examples (different difficulty classes) by using scripts/frontier.py
  • Once this file has been generated, run active.py with the special flag --dataset vqa2-frontier and whatever strategies you care about.
  • Sit back, examine the results, and get excited!

Concretely, you can generate the frontier files for a BottomUp-TopDown Attention Model as follows:

python scripts/frontier.py --model butd

Any other model would also work -- just make sure you've generated the map via cartograph.py first!
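
While scripts/frontier.py implements the paper's exact partitioning, the basic idea -- carving a dataset map into difficulty classes -- can be illustrated with simple quantile buckets over the per-example confidence statistic (a hypothetical sketch, not the repository's logic):

# Hypothetical sketch of difficulty buckets from dataset-map confidence (not frontier.py's logic)
import numpy as np

confidence = np.random.rand(5000)  # per-example mean gold-answer probability (see dataset maps above)

hard_cut, easy_cut = np.quantile(confidence, [0.33, 0.66])
easy_idx   = np.where(confidence >= easy_cut)[0]
medium_idx = np.where((confidence < easy_cut) & (confidence >= hard_cut))[0]
hard_idx   = np.where(confidence < hard_cut)[0]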


Results

We present the full set of results from the paper (and the additional results from the supplement) in the visualizations/ directory. The sub-directory active-learning shows performance vs. samples for various splits of strategies (visualizing all on the same plot is a bit taxing), while the sub-directory acquisitions has both the dataset maps and corresponding acquisitions per strategy!


Start-Up (from Scratch)

Use these commands if you're starting a repository from scratch (this shouldn't be necessary to use/build off of this code, but I like to keep this in the README in case things break in the future). Generally, you should be fine with the "Usage" section above!

Linux w/ GPU & CUDA 11.0

# Create Python Environment (assumes Anaconda -- replace with package manager of choice!)
conda create --name vqa-outliers python=3.8
conda activate vqa-outliers
conda install pytorch torchvision torchaudio cudatoolkit=11.0 -c pytorch
conda install ipython jupyter
conda install pytorch-lightning -c conda-forge

pip install typed-argument-parser h5py opencv-python matplotlib annoy seaborn spacy scipy transformers scikit-learn

Mac OS & Linux (CPU)

# Create Python Environment (assumes Anaconda -- replace with package manager of choice!)
conda create --name vqa-outliers python=3.8
conda activate vqa-outliers
conda install pytorch torchvision torchaudio -c pytorch
conda install ipython jupyter
conda install pytorch-lightning -c conda-forge

pip install typed-argument-parser h5py opencv-python matplotlib annoy seaborn spacy scipy transformers scikit-learn

Note

We are committed to maintaining this repository for the community. We did port this code to the latest versions of PyTorch Lightning and PyTorch, so there may be small incompatibilities we didn't catch in testing -- please feel free to open an issue if you run into problems, and I will respond within 24 hours. If urgent, please shoot me an email at [email protected] with "VQA-Outliers Code" in the subject line and I'll be happy to help!
