EMNLP 2021: Single-dataset Experts for Multi-dataset Question Answering


MADE (Multi-Adapter Dataset Experts)

This repository contains the implementation of MADE (Multi-Adapter Dataset Experts), described in the paper Single-dataset Experts for Multi-dataset Question Answering.

MADE combines a shared Transformer with a collection of adapters that are specialized to different reading comprehension datasets. See our paper for details.
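As background, here is a minimal sketch of the kind of bottleneck adapter that gets attached to each Transformer layer: a down-projection, a nonlinearity, an up-projection, and a residual connection. This is an illustration, not the adapter-transformers implementation, and the hidden and bottleneck sizes are placeholder values.

import torch.nn as nn

class BottleneckAdapter(nn.Module):
    # Illustrative residual bottleneck inserted after a Transformer sublayer.
    def __init__(self, hidden_size=768, bottleneck_size=64):
        super().__init__()
        self.down = nn.Linear(hidden_size, bottleneck_size)
        self.act = nn.ReLU()
        self.up = nn.Linear(bottleneck_size, hidden_size)

    def forward(self, hidden_states):
        # The residual connection leaves the shared Transformer's output intact.
        return hidden_states + self.up(self.act(self.down(hidden_states)))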


Requirements

The code uses Python 3.8, PyTorch, and the adapter-transformers library. Install the requirements with:

pip install -r requirements.txt

Download the data

You can download the datasets used in the paper from the repository for the MRQA 2019 shared task.

The datasets should be stored in directories ending with train or dev. For example, download the in-domain training datasets to a directory called data/train/ and download the in-domain development datasets to data/dev/.

For zero-shot and few-shot experiments, download the MRQA out-of-domain development datasets to a separate directory and split each one into training and development sets using scripts/split_datasets.py. For example, download the datasets to data/transfer/ and run

ls data/transfer/* -1 | xargs -l python scripts/split_datasets.py

Use the default random seed (13) to replicate the splits used in the paper.
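For reference, the splitting logic amounts to shuffling each file with a fixed seed and writing the two halves back out. The sketch below is hypothetical, not the actual scripts/split_datasets.py; in particular, the 50/50 split ratio and the output file naming are assumptions.

import gzip
import random
import sys

def split(path, seed=13, dev_fraction=0.5):
    # MRQA .jsonl.gz files begin with a header line, followed by one example per line.
    with gzip.open(path, "rt") as f:
        lines = f.readlines()
    header, examples = lines[0], lines[1:]
    random.Random(seed).shuffle(examples)
    n_dev = int(len(examples) * dev_fraction)
    splits = {"dev": examples[:n_dev], "train": examples[n_dev:]}
    for name, part in splits.items():
        out_path = path.replace(".jsonl.gz", f".{name}.jsonl.gz")
        with gzip.open(out_path, "wt") as f:
            f.writelines([header] + part)

if __name__ == "__main__":
    split(sys.argv[1])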

Download the trained models

The trained models are stored on the HuggingFace model hub at https://huggingface.co/princeton-nlp/MADE. All of the models are based on the RoBERTa-base model. They are: multi_dataset_ft (a single model fine-tuned on all six MRQA training datasets), made_transformer (the shared MADE Transformer), made_tuned_adapters/{dataset} (the MADE adapters after tuning on each individual dataset), and single_dataset_adapters/{dataset} (adapters trained on each dataset separately). These names correspond to the --load_from and --load_adapters_from arguments in the evaluation commands below.

To download just the MADE Transformer and adapters:

mkdir made_transformer
wget https://huggingface.co/princeton-nlp/MADE/resolve/main/made_transformer/model.pt -O made_transformer/model.pt

mkdir made_tuned_adapters
for d in SQuAD HotpotQA TriviaQA SearchQA NewsQA NaturalQuestions; do
  mkdir "made_tuned_adapters/${d}"
  wget "https://huggingface.co/princeton-nlp/MADE/resolve/main/made_tuned_adapters/${d}/model.pt" -O "made_tuned_adapters/${d}/model.pt"
done;

You can download all of the models at once by cloning the repository (first installing Git LFS):

git lfs install
git clone https://huggingface.co/princeton-nlp/MADE
mv MADE models

Run the model

The scripts in scripts/train/ and scripts/transfer/ provide examples of how to run the code. For more details, see the descriptions of the command line flags in run.py.

Train

You can use the scripts in scripts/train/ to train models on the MRQA datasets. For example, to train MADE:

./scripts/train/made_training.sh

And to tune the MADE adapters separately on individual datasets:

for d in SQuAD HotpotQA TriviaQA SearchQA NewsQA NaturalQuestions; do
  ./scripts/train/made_adapter_tuning.sh $d
done;

See run.py for details about the command line arguments.

Evaluate

The following commands run zero-shot evaluation on the out-of-domain development sets. To evaluate the single multi-dataset fine-tuned model:

python run.py \
    --eval_on BioASQ DROP DuoRC RACE RelationExtraction TextbookQA \
    --load_from multi_dataset_ft \
    --output_dir output/zero_shot/multi_dataset_ft

An individual MADE adapter (e.g. SQuAD):

python run.py \
    --eval_on BioASQ DROP DuoRC RACE RelationExtraction TextbookQA \
    --load_from made_transformer \
    --load_adapters_from made_tuned_adapters \
    --adapter \
    --adapter_name SQuAD \
    --output_dir output/zero_shot/made_tuned_adapters/SQuAD

An individual single-dataset adapter (e.g. SQuAD):

python run.py \
    --eval_on BioASQ DROP DuoRC RACE RelationExtraction TextbookQA \
    --load_adapters_from single_dataset_adapters/ \
    --adapter \
    --adapter_name SQuAD \
    --output_dir output/zero_shot/single_dataset_adapters/SQuAD

An ensemble of MADE adapters, which runs a forward pass through every adapter in parallel:

python run.py \
    --eval_on BioASQ DROP DuoRC RACE RelationExtraction TextbookQA \
    --load_from made_transformer \
    --load_adapters_from made_tuned_adapters \
    --adapter_names SQuAD HotpotQA TriviaQA SearchQA NewsQA NaturalQuestions \
    --made \
    --parallel_adapters  \
    --output_dir output/zero_shot/made_ensemble
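Conceptually, the ensemble averages the answer-span distributions produced under each adapter. The sketch below illustrates the idea; activate_adapter is a hypothetical helper and the model is assumed to return start and end logits, so this is not the actual run.py code path.

import torch

def ensemble_span_probs(model, batch, adapter_names, activate_adapter):
    # Average the start/end probability distributions across all adapters.
    start_probs, end_probs = 0.0, 0.0
    for name in adapter_names:
        activate_adapter(model, name)  # hypothetical helper
        with torch.no_grad():
            start_logits, end_logits = model(**batch)
        start_probs = start_probs + start_logits.softmax(dim=-1)
        end_probs = end_probs + end_logits.softmax(dim=-1)
    n = len(adapter_names)
    return start_probs / n, end_probs / n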

Averaging the parameters of the MADE adapters:

python run.py \
    --eval_on BioASQ DROP DuoRC RACE RelationExtraction TextbookQA \
    --load_from made_transformer \
    --load_adapters_from made_tuned_adapters \
    --adapter_names SQuAD HotpotQA TriviaQA SearchQA NewsQA NaturalQuestions \
    --adapter \
    --average_adapters  \
    --output_dir output/zero_shot/made_avg
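Parameter averaging instead builds one adapter whose weights are the element-wise mean of the tuned adapters' weights, so inference needs only a single forward pass. A minimal sketch, assuming each model.pt is a plain PyTorch state dict:

import torch

def average_state_dicts(paths):
    # Element-wise mean of matching tensors across adapter checkpoints.
    state_dicts = [torch.load(p, map_location="cpu") for p in paths]
    avg = {}
    for key in state_dicts[0]:
        avg[key] = torch.stack([sd[key].float() for sd in state_dicts]).mean(dim=0)
    return avg

datasets = ["SQuAD", "HotpotQA", "TriviaQA", "SearchQA", "NewsQA", "NaturalQuestions"]
averaged = average_state_dicts([f"made_tuned_adapters/{d}/model.pt" for d in datasets])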

Running UnifiedQA:

python run.py \
    --eval_on BioASQ DROP DuoRC RACE RelationExtraction TextbookQA \
    --seq2seq \
    --model_name_or_path allenai/unifiedqa-t5-base \
    --output_dir output/zero_shot/unifiedqa

Transfer

The scripts in scripts/transfer/ provide examples of how to run the few-shot transfer learning experiments described in the paper. For example, the following command will repeat for three random seeds: (1) sample 64 training examples from BioASQ, (2) calculate the zero-shot loss of all the MADE adapters on the training examples, (3) average the adapter parameters in proportion to zero-shot loss, (4) hold out 32 training examples for validation data, (5) train the adapter until performance stops improving on the 32 validation examples, and (6) evaluate the adapter on the full development set.

python run.py \
    --train_on BioASQ \
    --adapter_names SQuAD HotpotQA TriviaQA NewsQA SearchQA NaturalQuestions \
    --made \
    --parallel_made \
    --weighted_average_before_training \
    --adapter_learning_rate 1e-5 \
    --steps 200 \
    --patience 10 \
    --eval_before_training \
    --full_eval_after_training \
    --max_train_examples 64 \
    --few_shot \
    --criterion "loss" \
    --negative_examples \
    --save \
    --seeds 7 19 29 \
    --load_from "made_transformer" \
    --load_adapters_from "made_tuned_adapters" \
    --name "transfer/made_preaverage/BioASQ/64"

Bugs or questions?

If you have any questions related to the code or the paper, feel free to email Dan Friedman ([email protected]). If you encounter any problems when using the code or want to report a bug, please open an issue and describe the problem in as much detail as possible so we can help you quickly.

Citation

@inproceedings{friedman2021single,
   title={Single-dataset Experts for Multi-dataset Question Answering},
   author={Friedman, Dan and Dodge, Ben and Chen, Danqi},
   booktitle={Empirical Methods in Natural Language Processing (EMNLP)},
   year={2021}
}