EMNLP 2021: Single-dataset Experts for Multi-dataset Question Answering

MADE (Multi-Adapter Dataset Experts)

This repository contains the implementation of MADE (Multi-Adapter Dataset Experts), described in the paper Single-dataset Experts for Multi-dataset Question Answering.

MADE combines a shared Transformer with a collection of adapters that are specialized to different reading comprehension datasets. See our paper for details.

Requirements

The code uses Python 3.8, PyTorch, and the adapter-transformers library. Install the requirements with:

pip install -r requirements.txt
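
If you prefer an isolated environment, a minimal optional setup (the environment name here is arbitrary) is:

python3.8 -m venv made-env
source made-env/bin/activate
pip install -r requirements.txt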

Download the data

You can download the datasets used in the paper from the repository for the MRQA 2019 shared task.

The datasets should be stored in directories ending with train or dev. For example, download the in-domain training datasets to a directory called data/train/ and download the in-domain development datasets to data/dev/.
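
As a sketch, the in-domain datasets can be fetched directly from the MRQA release on S3 (SQuAD shown below; these URLs are assumed from the MRQA 2019 repository, and some datasets use slightly different file names, so verify the exact links in its download scripts):

mkdir -p data/train data/dev
wget https://s3.us-east-2.amazonaws.com/mrqa/release/v2/train/SQuAD.jsonl.gz -P data/train/
wget https://s3.us-east-2.amazonaws.com/mrqa/release/v2/dev/SQuAD.jsonl.gz -P data/dev/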

For zero-shot and few-shot experiments, download the MRQA out-of-domain development datasets to a separate directory and split them into training and development splits using scripts/split_datasets.py. For example, download the datasets to data/transfer/ and run

ls -1 data/transfer/* | xargs -L 1 python scripts/split_datasets.py

Use the default random seed (13) to replicate the splits used in the paper.
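
The out-of-domain development sets can be fetched the same way before splitting (BioASQ shown; again, the URL is assumed from the MRQA release and should be verified):

mkdir -p data/transfer
wget https://s3.us-east-2.amazonaws.com/mrqa/release/v2/dev/BioASQ.jsonl.gz -P data/transfer/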

Download the trained models

The trained models are stored on the HuggingFace model hub at https://huggingface.co/princeton-nlp/MADE. All of the models are based on RoBERTa-base. They include the checkpoints referenced below:

- made_transformer: the shared MADE Transformer
- made_tuned_adapters: the MADE adapters, one per in-domain MRQA dataset
- single_dataset_adapters: adapters trained separately on the individual datasets
- multi_dataset_ft: a single Transformer fine-tuned on all of the MRQA training datasets

To download just the MADE Transformer and adapters:

mkdir made_transformer
wget https://huggingface.co/princeton-nlp/MADE/resolve/main/made_transformer/model.pt -O made_transformer/model.pt

mkdir made_tuned_adapters
for d in SQuAD HotpotQA TriviaQA SearchQA NewsQA NaturalQuestions; do
  mkdir "made_tuned_adapters/${d}"
  wget "https://huggingface.co/princeton-nlp/MADE/resolve/main/made_tuned_adapters/${d}/model.pt" -O "made_tuned_adapters/${d}/model.pt"
done
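
The other checkpoints used below can presumably be fetched the same way, assuming they follow the same layout on the model hub (these paths are unverified):

mkdir multi_dataset_ft
wget https://huggingface.co/princeton-nlp/MADE/resolve/main/multi_dataset_ft/model.pt -O multi_dataset_ft/model.pt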

Alternatively, you can download all of the models at once by cloning the repository (after installing Git LFS):

git lfs install
git clone https://huggingface.co/princeton-nlp/MADE
mv MADE models

Run the model

The scripts in scripts/train/ and scripts/transfer/ provide examples of how to run the code. For more details, see the descriptions of the command line flags in run.py.

Train

You can use the scripts in scripts/train/ to train models on the MRQA datasets. For example, to train MADE:

./scripts/train/made_training.sh

And to tune the MADE adapters separately on individual datasets:

for d in SQuAD HotpotQA TriviaQA SearchQA NewsQA NaturalQuestions; do
  ./scripts/train/made_adapter_tuning.sh $d
done

See run.py for details about the command line arguments.
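
Assuming run.py exposes its flags through a standard argparse interface (which the flag style suggests, but is not confirmed here), you can also list every available option with:

python run.py --help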

Evaluate

A single model fine-tuned on all of the MRQA training datasets:

python run.py \
    --eval_on BioASQ DROP DuoRC RACE RelationExtraction TextbookQA \
    --load_from multi_dataset_ft \
    --output_dir output/zero_shot/multi_dataset_ft

An individual MADE adapter (e.g. SQuAD):

python run.py \
    --eval_on BioASQ DROP DuoRC RACE RelationExtraction TextbookQA \
    --load_from made_transformer \
    --load_adapters_from made_tuned_adapters \
    --adapter \
    --adapter_name SQuAD \
    --output_dir output/zero_shot/made_tuned_adapters/SQuAD
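
To evaluate all six MADE adapters with the same settings, you can wrap the command above in a loop:

for d in SQuAD HotpotQA TriviaQA SearchQA NewsQA NaturalQuestions; do
  python run.py \
      --eval_on BioASQ DROP DuoRC RACE RelationExtraction TextbookQA \
      --load_from made_transformer \
      --load_adapters_from made_tuned_adapters \
      --adapter \
      --adapter_name "$d" \
      --output_dir "output/zero_shot/made_tuned_adapters/${d}"
done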

An individual single-dataset adapter (e.g. SQuAD):

python run.py \
    --eval_on BioASQ DROP DuoRC RACE RelationExtraction TextbookQA \
    --load_adapters_from single_dataset_adapters/ \
    --adapter \
    --adapter_name SQuAD \
    --output_dir output/zero_shot/single_dataset_adapters/SQuAD

An ensemble of MADE adapters, which runs a forward pass through every adapter in parallel:

python run.py \
    --eval_on BioASQ DROP DuoRC RACE RelationExtraction TextbookQA \
    --load_from made_transformer \
    --load_adapters_from made_tuned_adapters \
    --adapter_names SQuAD HotpotQA TriviaQA SearchQA NewsQA NaturalQuestions \
    --made \
    --parallel_adapters  \
    --output_dir output/zero_shot/made_ensemble

Averaging the parameters of the MADE adapters:

python run.py \
    --eval_on BioASQ DROP DuoRC RACE RelationExtraction TextbookQA \
    --load_from made_transformer \
    --load_adapters_from made_tuned_adapters \
    --adapter_names SQuAD HotpotQA TriviaQA SearchQA NewsQA NaturalQuestions \
    --adapter \
    --average_adapters  \
    --output_dir output/zero_shot/made_avg

Running UnifiedQA:

python run.py \
    --eval_on BioASQ DROP DuoRC RACE RelationExtraction TextbookQA \
    --seq2seq \
    --model_name_or_path allenai/unifiedqa-t5-base \
    --output_dir output/zero_shot/unifiedqa

Transfer

The scripts in scripts/transfer/ provide examples of how to run the few-shot transfer learning experiments described in the paper. For example, for each of three random seeds, the following command will:

1. sample 64 training examples from BioASQ;
2. calculate the zero-shot loss of all the MADE adapters on those training examples;
3. average the adapter parameters in proportion to the zero-shot losses;
4. hold out 32 of the training examples as validation data;
5. train the adapter until performance on the 32 validation examples stops improving;
6. evaluate the adapter on the full development set.

python run.py \
    --train_on BioASQ \
    --adapter_names SQuAD HotpotQA TriviaQA NewsQA SearchQA NaturalQuestions \
    --made \
    --parallel_made \
    --weighted_average_before_training \
    --adapter_learning_rate 1e-5 \
    --steps 200 \
    --patience 10 \
    --eval_before_training \
    --full_eval_after_training \
    --max_train_examples 64 \
    --few_shot \
    --criterion "loss" \
    --negative_examples \
    --save \
    --seeds 7 19 29 \
    --load_from "made_transformer" \
    --load_adapters_from "made_tuned_adapters" \
    --name "transfer/made_preaverage/BioASQ/64"

Bugs or questions?

If you have any questions related to the code or the paper, feel free to email Dan Friedman ([email protected]). If you encounter any problems when using the code, or want to report a bug, you can open an issue. Please describe the problem in as much detail as you can so we can help you quickly and effectively!

Citation

@inproceedings{friedman2021single,
   title={Single-dataset Experts for Multi-dataset Question Answering},
   author={Friedman, Dan and Dodge, Ben and Chen, Danqi},
   booktitle={Empirical Methods in Natural Language Processing (EMNLP)},
   year={2021}
}