Empathetic Response Generation with State Management

Overview

Source code for our paper "Empathetic Response Generation with State Management"

This repository is maintained by Jun Gao and Yuhan Liu.

Model Overview

(figure: model architecture)

Environment Requirements

  • pytorch >= 1.4
  • sklearn
  • nltk
  • numpy
  • bert-score
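Assuming a standard Python environment, the pure-Python dependencies can be installed from PyPI (PyTorch itself is best installed following the official instructions for your CUDA version):

    pip install scikit-learn nltk numpy bert-score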

Dataset

You can directly use the processed dataset located in data/empathetic:

├── data
│   ├── empathetic
│   │   ├── parsed_emotion_Ekman_intent_test.json
│   │   ├── parsed_emotion_Ekman_intent_train.json
│   │   ├── parsed_emotion_Ekman_intent_valid.json
│   │   ├── emotion_intent_trans.mat
│   │   ├── goEmotion_emotion_trans.mat
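For a quick sanity check, you can inspect these files with a short Python snippet. This is a minimal sketch: the JSON structure and the variable names stored inside the .mat files are assumptions, so print them to confirm (scipy ships as a dependency of sklearn).

    import json
    from scipy.io import loadmat

    # Load one split of the annotated dialogues; inspect an entry to
    # see the actual fields (utterances, emotion and intent labels, ...).
    with open("data/empathetic/parsed_emotion_Ekman_intent_train.json") as f:
        train_data = json.load(f)
    print(type(train_data), len(train_data))

    # Load a prior transition matrix; the keys inside the .mat file
    # are assumptions, so list them first.
    mat = loadmat("data/empathetic/emotion_intent_trans.mat")
    print([k for k in mat.keys() if not k.startswith("__")])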

Alternatively, if you want to reproduce the data annotation with the goEmotion emotion classifier and the empathetic intent classifier, run the following commands:

  • Convert the raw CSV empathetic dialogue data into JSON format (original dataset link: EmpatheticDialogues):

    bash preprocess_raw.sh
  • Train the emotion classifier on the goEmotion dataset and annotate the data (original dataset link: goEmotion). Here $BERT_DIR is your pretrained BERT model directory, which includes vocab.txt, config.json and pytorch_model.bin; we simply use bert-base-en from Hugging Face:

    bash ./bash/emotion_annotate.sh  $BERT_DIR 32 0.00005 16 3 1024 2 0.1
  • Train the intent classifier on the empathetic intent dataset and annotate the data (original dataset link: Empathetic_Intent):

    bash ./bash/intent_annotate.sh  $BERT_DIR 32 0.00005 16 3 1024 2 0.1
  • Build the prior emotion-emotion and emotion-intent transition matrices (see the sketch after this list):

    bash ./bash/build_transition_mat.sh
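To make the last step concrete: a transition matrix here is a row-stochastic matrix over label pairs. The following is a conceptual sketch only, not the repository's actual code, and the label-set sizes are placeholders.

    import numpy as np

    # Hypothetical co-occurrence counts: rows index speaker emotions,
    # columns index listener intents (7 and 9 are placeholder sizes).
    counts = np.random.randint(1, 100, size=(7, 9)).astype(float)

    # Row-normalize so that row i gives P(intent | emotion_i).
    trans = counts / counts.sum(axis=1, keepdims=True)
    assert np.allclose(trans.sum(axis=1), 1.0)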

Train

For training the LM-based model, you need to download bert-base-en and gpt2-small from Hugging Face first, then run the following command, where $GPT_DIR and $BERT_DIR are the downloaded model directories:

bash ./bash/train_LM.sh --gpt_path $GPT_DIR --bert_path $BERT_DIR --gpu_id 2 --epoch 5 --lr_NLU 0.00003 --lr_NLG 0.00008 --bsz_NLU 16 --bsz_NLG 16

For example:

bash ./bash/train_LM.sh --gpt_path /home/liuyuhan/datasets/gpt2-small --bert_path /home/liuyuhan/datasets/bert-base-en --gpu_id 2 --epoch 5 --lr_NLU 0.00003 --lr_NLG 0.00008 --bsz_NLU 16 --bsz_NLG 16

For training the Trs-based model, we use glove.6B.300d as the pretrained word embeddings. You can run the following command to train the model, where $GLOVE is the GloVe embedding txt file:

bash ./bash/train_Trs.sh --gpu_id 2 --epoch 15 --lr_NLU 0.00007 --lr_NLG 0.0015 --bsz_NLU 16 --bsz_NLG 16 --glove $GLOVE

For example:

bash ./bash/train_Trs.sh --gpu_id 2 --epoch 15 --lr_NLU 0.00007 --lr_NLG 0.0015 --bsz_NLU 16 --bsz_NLG 16 --glove /home/liuyuhan/datasets/glove/glove.6B.300d.txt

Evaluate

To generate the automatic metric results, first make sure that bert-score is installed. In our paper, we use roberta-large-en rescaled with the baseline to calculate BERTScore. You can download roberta-large-en from Hugging Face. The rescaled_baseline file can be downloaded from here; put it under the roberta-large-en model directory.

Then you can run the following commands to get the results. Here $hypothesis is the generated response file, the --ref argument points to the ground-truth response file, $result is the output result file, and $ROBERTA_DIR is the downloaded roberta-large-en model directory.

To evaluate LM-based model, the command is:

bash ./bash/eval.sh --hyp $hypothesis --ref ./data/empathetic/ref.txt --out $result --bert $ROBERTA_DIR --gpu_id 0 --mode LM

To evaluate Trs-based model, the command is:

bash ./bash/eval.sh --hyp $hypothesis --ref ./data/empathetic/ref_tokenize.txt --out $result --bert $ROBERTA_DIR --gpu_id 0 --mode Trs
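If you prefer to compute BERTScore directly with the Python API rather than the script above, a minimal sketch might look like this. The local model path, the baseline file name, and the layer choice are assumptions; bert-score uses layer 17 by default for roberta-large.

    from bert_score import score

    with open("hyp.txt") as f:                  # generated responses, one per line
        cands = [line.strip() for line in f]
    with open("data/empathetic/ref.txt") as f:  # ground-truth responses
        refs = [line.strip() for line in f]

    # rescale_with_baseline applies the rescaling used in the paper;
    # baseline_path points at the downloaded rescaled_baseline file.
    P, R, F1 = score(
        cands, refs,
        model_type="/path/to/roberta-large-en",  # assumed local model directory
        num_layers=17,
        lang="en",
        rescale_with_baseline=True,
        baseline_path="/path/to/roberta-large-en/rescaled_baseline.tsv",
    )
    print(f"BERTScore F1: {F1.mean().item():.4f}")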