Data & Code for ACCENTOR: Adding Chit-Chat to Enhance Task-Oriented Dialogues

Overview

ACCENTOR consists of human-annotated chit-chat additions to the 23.8K dialogues from Schema-Guided Dialogue (SGD) and MultiWOZ 2.1, allowing researchers to study the contextual addition of chit-chat utterances for virtual assistants and to make task-oriented dialogues more engaging and social.

We also provide three new models for ACCENTOR explicitly trained to predict user goals and to generate contextually relevant chit-chat responses.

Automatic and human evaluations show that, compared with the state-of-the-art task-oriented baseline, our models can code-switch between task and chit-chat to be more engaging, interesting, knowledgeable, and humanlike, while maintaining competitive task performance.

For more details, please refer to the paper (https://arxiv.org/abs/2010.12757).

Data

  • v1.0/candidates-{sgd,multiwoz}.json: Annotated chit-chat candidates. The format is as follows.
{
 "dialogue 1 / id": [
  [
   dialogue 1 / candidate 1 / turn id,
   dialogue 1 / candidate 1 / position,
   dialogue 1 / candidate 1 / candidate,
   dialogue 1 / candidate 1 / label,
   dialogue 1 / candidate 1 / justification
  ],
  [
   dialogue 1 / candidate 2 / turn id,
   ...
  ],
  ...
 ],
 "dialogue 2 / id": [
  ...
 ],
 ...
}
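
For example, the following minimal sketch loads and traverses this file in Python. Paths and field semantics follow the listing above; treat it as illustrative rather than part of the released tooling.

import json

# Load the annotated chit-chat candidates for SGD.
with open("v1.0/candidates-sgd.json") as f:
    candidates = json.load(f)

# Each dialogue id maps to a list of
# [turn id, position, candidate, label, justification] entries.
for dialogue_id, entries in candidates.items():
    for turn_id, position, candidate, label, justification in entries:
        print(dialogue_id, turn_id, position, label, candidate)
    break  # inspect the first dialogue only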
  • Folder v1.0/accentor-sgd: The augmented SGD dataset. The format follows the original SGD dataset, with two additional keys (i.e., beginning and end) that store lists of (candidate, label, justification) tuples.

    • The folder is generated by v1.0/accentor-sgd.py (with v1.0/candidates-sgd.json and the original SGD dataset as input). Usage: python3 v1.0/accentor-sgd.py --help.
  • v1.0/accentor-multiwoz-1k.json: 1K augmented MultiWOZ 2.1 dialogues. The format follows the original MultiWOZ dataset, with two additional keys (i.e., beginning and end) that store lists of (candidate, label, justification) tuples (see the reading sketch after this list).

    • The file is generated by v1.0/accentor-multiwoz.py (with v1.0/candidates-multiwoz.json and the original MultiWOZ 2.1 dataset as input). Usage: python3 v1.0/accentor-multiwoz.py --help.
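
As a companion to the items above, here is a minimal sketch for reading the augmented annotations. It assumes the original MultiWOZ layout (a dict from dialogue id to a dialogue with a "log" list of turns) and "good"/"bad" label strings; check both assumptions against the actual files.

import json

# Load the 1K augmented MultiWOZ 2.1 dialogues.
with open("v1.0/accentor-multiwoz-1k.json") as f:
    dialogues = json.load(f)

for dialogue_id, dialogue in dialogues.items():
    for turn in dialogue["log"]:
        # "beginning" and "end" store (candidate, label, justification)
        # tuples, serialized as JSON lists; missing keys default to empty.
        for candidate, label, justification in turn.get("beginning", []):
            if label == "good":
                print(dialogue_id, "prepend:", candidate)
        for candidate, label, justification in turn.get("end", []):
            if label == "good":
                print(dialogue_id, "append:", candidate)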

Baseline Models

Preparation

  • Dependencies: ParlAI (af12799a) and Transformers (2.11.0)

  • Run the following commands to prepare the data for model training, as well as the off-the-shelf models (i.e., a task-oriented dialogue model and a chit-chat model) used by Arranger and Rewriter.

cp -r ./v1.0/accentor-sgd .

python3 gen_delex.py

python3 gen_parlai_data.py

parlai train_model -t fromfile:parlaiformat --fromfile_datapath ./parlai --fromfile-datatype-extension true -m transformer/generator --init-model zoo:tutorial_transformer_generator/model --dict-file zoo:tutorial_transformer_generator/model.dict --embedding-size 512 --n-layers 8 --ffn-size 2048 --dropout 0.1 --n-heads 16 --learn-positional-embeddings True --n-positions 512 --variant xlm --activation gelu --skip-generation True --fp16 True --text-truncate 512 --label-truncate 128 --dict-tokenizer bpe --dict-lower True -lr 1e-06 --optimizer adamax --lr-scheduler reduceonplateau --gradient-clip 0.1 -veps 0.25 --betas 0.9,0.999 --update-freq 1 --attention-dropout 0.0 --relu-dropout 0.0 -vp 15 -stim 60 -vme 20000 -bs 16 -vmt ppl -vmm min --save-after-valid True --model-file ./train_90M

parlai interactive -mf ./train_90M < lm.input.dev.cc.txt > lm.output.dev.cc.txt

parlai interactive -mf ./train_90M < lm.input.test.cc.txt > lm.output.test.cc.txt

python3 run_language_modeling.py --output_dir=output_gpt2_10epoch_1e-3_fp16 --model_type=gpt2 --model_name_or_path=gpt2 --do_train --train_data_file=lm.input.train.txt --do_eval  --eval_data_file=lm.input.dev.txt --per_device_train_batch_size 2 --gradient_accumulation_steps 18 --num_train_epochs 10 --learning_rate 1e-3 --fp16 --overwrite_output_dir

python3 run_generation.py --input lm.input.dev.eval.txt --output dev.inference.gpt2_10epoch_1e-3_fp16.json --model_name_or_path ./output_gpt2_10epoch_1e-3_fp16 --eos_token_id 50262

python3 run_generation.py --input lm.input.test.eval.txt --output test.inference.gpt2_10epoch_1e-3_fp16.json --model_name_or_path ./output_gpt2_10epoch_1e-3_fp16 --eos_token_id 50262

SimpleTOD+

  • Dependency: Transformers (2.11.0)
python3 run_language_modeling.py --output_dir=output_both_gpt2_10epoch_1e-3_fp16 --model_type=gpt2 --model_name_or_path=gpt2 --do_train --train_data_file=lm.input.train.both.txt --do_eval  --eval_data_file=lm.input.dev.both.txt --per_device_train_batch_size 2 --gradient_accumulation_steps 18 --num_train_epochs 10 --learning_rate 1e-3 --fp16 --overwrite_output_dir

python3 run_generation.py --input lm.input.dev.eval.txt --output dev.inference.both_gpt2_10epoch_1e-3_fp16.json --model_name_or_path ./output_both_gpt2_10epoch_1e-3_fp16 --eos_token_id 50262

python3 run_generation.py --input lm.input.test.eval.txt --output test.inference.both_gpt2_10epoch_1e-3_fp16.json --model_name_or_path ./output_both_gpt2_10epoch_1e-3_fp16 --eos_token_id 50262

Arranger

  • Dependency: Transformers (2.2.0)
python3 gen_arranger_input.py

python3 run_multiple_choice.py --model_type roberta --task_name acc --model_name_or_path roberta-base --do_train --do_eval --do_test --do_lower_case --data_dir . --learning_rate 2e-5 --num_train_epochs 3 --max_seq_length 512 --output_dir acc_arranger_roberta_base_3epoch --per_gpu_eval_batch_size=16 --per_gpu_train_batch_size=1 --gradient_accumulation_steps 24 --overwrite_output --save_steps 10000

python3 gen_arranger_output.py

Rewriter

  • Dependency: Transformers (2.11.0)
python3 gen_rewriter_data.py

python3 run_language_modeling.py --output_dir=output_ff_gpt2_10epoch_1e-3_fp16 --model_type=gpt2 --model_name_or_path=gpt2 --do_train --train_data_file=lm.input.train.ff.txt  --do_eval --eval_data_file=lm.input.dev.ff.txt --per_device_train_batch_size 2 --gradient_accumulation_steps 18 --num_train_epochs 10 --learning_rate 1e-3 --fp16 --overwrite_output_dir

python3 run_generation.py --input lm.input.dev.eval.ff.txt --output dev.inference.ff_gpt2_10epoch_1e-3_fp16.json --model_name_or_path ./output_ff_gpt2_10epoch_1e-3_fp16 --eos_token_id 50262

python3 run_generation.py --input lm.input.test.eval.ff.txt --output test.inference.ff_gpt2_10epoch_1e-3_fp16.json --model_name_or_path ./output_ff_gpt2_10epoch_1e-3_fp16 --eos_token_id 50262

Evaluation

  • Dependency: the official evaluation script of SGD

  • Pass the output inference files (i.e., {dev,test}.inference*.json) to gen_predict.py to obtain act-slot F1 and BLEU-4 scores. For example,

python3 gen_predict.py --inference test.inference.both_gpt2_10epoch_1e-3_fp16.json --split test
  • The above command will also generate a folder (named ./prediction/ by default), which can be passed to the official evaluation script of SGD to obtain the joint goal accuracy and average accuracy. For example,
python3 -m schema_guided_dst.evaluate --dstc8_data_dir ./simpletod/ --prediction_dir ./prediction/test/ --eval_set test --output_metric_file simpletod+_test_result.json

Citations

If you want to publish experimental results with our datasets or use the baseline models, please cite the following article:

@inproceedings{sun2020adding,
  title={Adding Chit-Chat to Enhance Task-Oriented Dialogues},
  author={Sun, Kai and Moon, Seungwhan and Crook, Paul and Roller, Stephen and Silvert, Becka and Liu, Bing and Wang, Zhiguang and Liu, Honglei and Cho, Eunjoon and Cardie, Claire},
  booktitle={Proceedings of NAACL-HLT},
  year={2021},
  url={https://arxiv.org/abs/2010.12757}
}

License

ACCENTOR is released under CC-BY-SA-4.0; see LICENSE for details.
