Benchmark for evaluating open-ended generation

Overview

OpenMEVA

Contributed by Jian Guan, Zhexin Zhang. Thanks to Jiaxin Wen for debugging.

OpenMEVA is a benchmark for evaluating open-ended story generation metrics (please refer to the Paper List for more information about open-ended language generation tasks) described in the paper: OpenMEVA: A Benchmark for Evaluating Open-ended Story Generation Metrics (ACL 2021 Long Paper). In addition, OpenMEVA provides an open-source and extensible toolkit for metric implementation, evaluation, comparison, and analysis, as well as data perturbation techniques to help generate large numbers of customized test cases. We expect the toolkit to empower fast development of automatic metrics.

Contents

Introduction for Language Generation Evaluation

Since human evaluation is time-consuming, expensive, and difficult to reproduce, the community commonly uses automatic metrics for evaluation. We roughly divide existing metrics as follows:

  • Previous studies in conditional language generation tasks (e.g., machine translation) have developed several successful referenced metrics, which roughly quantify the lexical overlap (e.g., BLEU) or semantic entailment (e.g., BERTScore) between a generated sample and the reference.
  • Referenced metrics correlate poorly with human judgments when evaluating open-ended language generation. Specifically, a generated sample can be reasonable if it is coherent with the given input and self-consistent within its own context, without necessarily resembling the reference literally or semantically. To address this one-to-many issue, unreferenced metrics (e.g., UNION) have been proposed to measure the quality of a generated sample without any reference.
  • In addition, some researchers propose combining referenced and unreferenced metrics into hybrid metrics, which usually average two individual metric scores (e.g., RUBER) or learn from human preference (e.g., ADEM); a toy sketch of the averaging idea follows this list. However, ADEM is reported to lack generalization and robustness with limited human annotation.
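
As a toy illustration of the hybrid idea, here is a minimal sketch that simply averages a referenced score and an unreferenced score; both score functions are hypothetical placeholders, not the actual RUBER implementation:

# A toy hybrid metric: average a referenced score (similarity to the
# reference) with an unreferenced score (quality given only the context).
# Both score functions are hypothetical placeholders passed in by the caller.
def hybrid_score(context, candidate, reference,
                 referenced_score, unreferenced_score):
    r = referenced_score(candidate, reference)    # e.g., lexical/semantic overlap
    u = unreferenced_score(context, candidate)    # e.g., learned coherence score
    return (r + u) / 2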

The existing generation models are still far from human ability to generate reasonable texts, particularly for open-ended language generation tasks such as story generation. One important factor that hinders this research is the lack of powerful metrics for measuring generation quality. Therefore, we propose OpenMEVA as a standard paradigm for measuring the progress of metrics.

Install

Clone the repository from our GitHub page (don't forget to star us!)

git clone https://github.com/thu-coai/OpenMEVA.git

Then install all the requirements:

pip install -r requirements.txt

Then install the package with

python setup.py install

If you also want to modify the code, run this:

python setup.py develop
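
To verify the installation (assuming the package is importable as eva, as in the examples below):

python -c "import eva"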

Toolkit

I. Metrics Interface

1. Metric List

We publish standard implementations for a set of widely used metrics, including those that appear in the examples and scripts below (e.g., BLEU, Forward Perplexity, RUBER, and UNION).

2. Usage

It is handy to construct a metric object and use it to evaluate given examples:

from eva.bleu import BLEU
metric = BLEU()

# for more information about the metric
print(metric.info)

# data is a list of dictionaries: [{"context": ..., "candidate": ..., "reference": ...}, ...]
print(metric.compute(data))

We provide a Python file, test.py, as a guide to accessing the API.

These metrics are not exhaustive; they are a starting point for further metric research. We welcome pull requests for other metrics (which require implementing only three methods: __init__, info, and compute); a rough skeleton is sketched below.
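
As a rough template for such a contribution, here is a hedged skeleton of a metric class; the scoring logic is a toy placeholder, and only the data format follows the example above:

# A hypothetical skeleton for a new metric; only the three required
# methods are shown, and the score itself is a toy placeholder.
class MyMetric:
    def __init__(self):
        self.name = "my_metric"

    @property
    def info(self):
        # a short description of the metric
        return {"name": self.name, "description": "placeholder metric"}

    def compute(self, data):
        # data: [{"context": ..., "candidate": ..., "reference": ...}, ...]
        # toy score: length ratio between candidate and reference
        return [len(d["candidate"]) / max(len(d["reference"]), 1) for d in data]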

3. Training Learnable Metrics

Execute the following command for training learnable metrics:

cd ./eva/model

# training language model for computing forward perplexity
bash ./run_language_modeling.sh

# training the unreferenced model for computing RUBER (RNN version)
bash ./run_ruber_unrefer.sh

# training the unreferenced model for computing RUBER (BERT version)
bash ./run_ruber_unrefer_bert.sh

# training the model for computing UNION
bash ./run_union.sh

II. Evaluating Human Scores

The Python file test.py also includes detailed instructions for accessing the API for evaluating human scores.

1. Constructing

from eva.heva import Heva

# list of all possible human scores (int/float/str).
all_possible_score_list = [1,2,3,4,5]

# construct an object for following evaluation
heva = Heva(all_possible_score_list)

2. Consistency of human scores

# list of human score lists; each row includes all the human scores for one example
human_score_list = [[1,3,2], [1,3,3], [2,3,1], ...]

print(heva.consistency(human_score_list))
# {"Fleiss's kappa": ..., "ICC correlation": ..., "Kendall-w":..., "krippendorff's alpha":...}
# the results includes correlation and p-value for significance test.

3. Mean Test for scores of examples from different sources

# list of metric scores (float)
metric_score_1, metric_score_2 = [3.2, 2.4, 3.1,...], [3.5, 1.2, 2.3, ...]

# T-test for the means of two independent samples of scores.
print(heva.mean_test(metric_score_1, metric_score_2))
# {"t-statistic": ..., "p-value": ...}

4. Distribution of human scores

# list of human scores (float)
human_score = [2.0, 4.2, 1.2, 4.9, 2.6, 3.1, 4.0, 1.5,...]

# path for saving the figure of distribution
figure_path = "./figure"

# indicating the source of the annotated examples. default: ""
model_name = "gpt"

# plot the figure of distribution of human scores
heva.save_distribution_figure(score=human_score, save_path=figure_path, model_name=model_name, ymin=0, ymax=50)

5. Correlation between human and metric scores

# list of human scores (float)
human_score = [2.0, 4.2, 1.2, 4.9, 2.6, 3.1, 4.0, 1.5,...]

# list of metric scores (float)
metric_score = [3.2, 2.4, 3.1, 3.5, 1.2, 2.3, 3.5, 1.1,...]

# computing correlation
print(heva.correlation(metric_score, human_score))

# path for saving the figure of distribution
figure_path = "./figure"

# indicating the source of the metric scores. default: ""
metric_name = "bleu"

# plot the figure of metric score vs. human scores
heva.save_correlation_figure(human_score, metric_score, save_path=figure_path, metric_name=metric_name)

III. Perturbation Techniques

1. Perturbation List

We provide perturbation techniques in the following aspects to create large-scale test cases for evaluating the comprehensive capabilities of metrics:

  • Lexical repetition

    • Repeating n-grams or sentences:

      He stepped on the stage and stepped on the stage.
  • Semantic repetition:

    • Repeating sentences with paraphrases by back translation:

      He has been from Chicago to Florida. He moved to Florida from Chicago.

  • Character behavior:

    • Reordering the subject and object of a sentence:

      Lars looked at the girl with desire. → the girl looked at Lars with desire.
    • Substituting the personal pronouns referring to other characters:

      her mother took them to ... → their mother took her to ...
  • Common sense:

    • Substituting the head or tail entities in a commonsense triple of ConceptNet:

      Martha puts her dinner into the oven. She lays down for a quick nap. She oversleeps and runs into the kitchen (→ garden) to take out her burnt dinner.
  • Consistency:

    • Inserting or Deleting negated words or prefixes:

      She had (→ did not have) money to get vaccinated. She had a flu shot ...
      She agreed (→ disagreed) to get vaccinated.
    • Substituting words with antonyms:

      She is happy (→ upset) that she had a great time ...
  • Coherence:

    • Substituting words, phrases or sentences:

      Christmas was very soon. Kelly wanted to put up the Christmas tree. (→ Eventually it went into remission.)
  • Causal Relationship:

    • Reordering the cause and effect:

      the sky was clear so he could see clearly the boat. → he could see clearly the boat so the sky was clear.
    • Substituting the causality-related words randomly:

      the sky was clear so (→ because) he could see clearly the boat.
  • Temporal Relationship:

    • Reordering two sequential events:

      I eat one bite. Then I was no longer hungry. → I was no longer hungry. Then I eat one bite.
    • Substituting the time-related words:

      After (→ Before) eating one bite I was no longer hungry.
  • Synonym:

    • Substituting a word with its synonym:

      I just purchased (→ bought) my uniforms.
  • Paraphrase:

    • Substituting a sentence with its paraphrase by back translation:

      Her dog doesn't shiver anymore. → Her dog stops shaking.
  • Punctuation:

    • Inserting or Deleting an inessential punctuation mark:

      Eventually, (→ Eventually) he became very hungry.
  • Contraction:

    • Contracting or Expanding a contraction:

      I’ll (→ I will) have to keep waiting.
  • Typo:

    • Swapping two adjacent characters:

      that orange (→ ornage) broke her nose.
    • Repeating or Deleting a character:

      that orange (→ orannge) broke her nose.

2. Usage

It is handy to construct a perturbation object and use it to perturb given examples:

from eva.perturb.perturb import *
custom_name = "story"
method = add_typos(custom_name)

# data is a list of dictionaries: [{"id": 0, "ipt": ..., "truth": ...}, ...]
print(method.construct(data))
# the perturbed examples are saved under the directory named by custom_name ("story" here)

We provide a Python file, test_perturb.py, as a guide to accessing the API.

You can download dependent files for some perturbation techniques by executing the following command:

cd ./eva/perturb
bash ./download.sh

You can also download them by THUCloud or Google Drive.

These perturbation techniques are not exhaustive; they are a starting point for further evaluation research. We welcome pull requests for other perturbation techniques (which require implementing only two methods: __init__ and construct); a rough skeleton is sketched below.
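
As a rough template, here is a hedged skeleton of a perturbation class in the same spirit; the perturbation logic is a toy placeholder, and only the data format follows the example above:

# A hypothetical skeleton for a new perturbation technique; the actual
# perturbation applied here is a toy placeholder.
class my_perturb:
    def __init__(self, custom_name):
        self.custom_name = custom_name

    def construct(self, data):
        # data: [{"id": 0, "ipt": ..., "truth": ...}, ...]
        perturbed = []
        for example in data:
            new_example = dict(example)
            # toy perturbation: uppercase the target text
            new_example["truth"] = example["truth"].upper()
            perturbed.append(new_example)
        return perturbed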

Note 📑 We adopt uda for back translation. We provide an example, eva/perturb/back_trans_data/story_bt.json, to indicate the format of back-translation results. You can download the results for ROCStories and WritingPrompts via THUCloud or Google Drive.

Benchmark

I. Datasets

1. Machine-Generated Stories (MAGS) with manual annotation

We provide annotated stories from ROCStories (ROC) and WritingPrompts (WP). Some statistics are as follows:

Boxplot of annotation scores for each story source (Left: ROC, Right: WP):

2. Auto-Constructed Stories (ACTS)

We create large-scale test examples based on ROC and WP with the aforementioned perturbation techniques. ACTS includes examples for two test types, i.e., the discrimination test and the invariance test.

  • The discrimination test requires metrics to distinguish human-written positive examples from negative ones. We create each negative example by applying a perturbation within an individual aspect. In addition, we select different positive examples targeted at the corresponding aspects. The table below shows the numbers of positive and negative examples in different aspects.

  • The invariance test expects the metric judgments to remain the same when we apply rationality-preserving perturbations, which have almost no influence on the quality of the examples. The original examples can be either human-written stories or the negative examples created in the discrimination test. The table below shows the numbers of original (also perturbed) positive and negative examples in different aspects; a minimal sketch of both test protocols follows this list.
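
To make the two protocols concrete, here is a minimal sketch (our own illustration, not the benchmark code) of how a metric could be scored on each test, given paired metric scores:

# A minimal illustration (not the benchmark implementation) of the two test types.
def discrimination_accuracy(pos_scores, neg_scores):
    # fraction of pairs where the positive example outscores its paired negative
    pairs = list(zip(pos_scores, neg_scores))
    return sum(p > n for p, n in pairs) / len(pairs)

def invariance_gap(original_scores, perturbed_scores):
    # mean absolute score change under a rationality-preserving perturbation;
    # smaller means more robust
    gaps = [abs(o - p) for o, p in zip(original_scores, perturbed_scores)]
    return sum(gaps) / len(gaps)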

3. Download & Data Instruction

You can download the whole dataset by THUCloud or Google Drive.

├── data
│   ├── `mags_data`
│   │   ├── `mags_roc.json`    # sampled stories and corresponding human annotation
│   │   └── `mags_wp.json`     # sampled stories and corresponding human annotation
│   └── `acts_data`
│       ├── `roc`
│       │   ├── `roc_train_ipt.txt`    # input for training set
│       │   ├── `roc_train_opt.txt`    # output for training set
│       │   ├── `roc_valid_ipt.txt`    # input for validation set
│       │   ├── `roc_valid_opt.txt`    # output for validation set
│       │   ├── `roc_test_ipt.txt`     # input for test set
│       │   ├── `roc_test_opt.txt`     # output for test set
│       │   ├── `discrimination_test`
│       │   │   ├── `roc_lexical_rept.txt`
│       │   │   ├── `roc_lexical_rept_perturb.txt`
│       │   │   ├── `roc_semantic_rept.txt`
│       │   │   ├── `roc_semantic_rept_perturb.txt`
│       │   │   ├── `roc_character.txt`
│       │   │   ├── `roc_character_perturb.txt`
│       │   │   ├── `roc_commonsense.txt`
│       │   │   ├── `roc_commonsense_perturb.txt`
│       │   │   ├── `roc_coherence.txt`
│       │   │   ├── `roc_coherence_perturb.txt`
│       │   │   ├── `roc_consistency.txt`
│       │   │   ├── `roc_consistency_perturb.txt`
│       │   │   ├── `roc_cause.txt`
│       │   │   ├── `roc_cause_perturb.txt`
│       │   │   ├── `roc_time.txt`
│       │   │   └── `roc_time_perturb.txt`
│       │   └── `invariance_test`
│       │       ├── `roc_synonym_substitute_perturb.txt`
│       │       ├── `roc_semantic_substitute_perturb.txt`
│       │       ├── `roc_contraction_perturb.txt`
│       │       ├── `roc_delete_punct_perturb.txt`
│       │       ├── `roc_typos_perturb.txt`
│       │       ├── `roc_negative_sample.txt`    # sampled negative examples from the discrimination test
│       │       ├── `roc_negative_sample_synonym_substitute_perturb.txt`
│       │       ├── `roc_negative_sample_semantic_substitute_perturb.txt`
│       │       ├── `roc_negative_sample_contraction_perturb.txt`
│       │       ├── `roc_negative_sample_delete_punct_perturb.txt`
│       │       └── `roc_negative_sample_typos_perturb.txt`
│       └── `wp`
│           └── ...

II. Tasks

OpenMEVA includes a suite of tasks to test comprehensive capabilities of metrics:

1. Correlation with human scores (based on MAGS)

2. Generalization across generation models and datasets (for learnable metrics, based on MAGS)

3. Judgment in general linguistic features (based on the discrimination test set of ACTS)

4. Robustness to rationality-preserving perturbations (based on the invariance test set of ACTS)

Note: for the robustness test, the smaller the absolute value of the correlation, the better.

5. Fast Test

You can test these capabilities of new metrics with the following commands:

cd ./benchmark

# test correlation with human scores and generalization
python ./corr_gen.py

# test judgment
python ./judge.py

# test robustness
python ./robust.py

We take BLEU and Forward Perplexity as examples in the Python files. You can test your own metrics with minor modifications; a hedged sketch follows.
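
For instance, here is a hedged sketch of wiring a custom metric into the correlation test, reusing the Heva interface shown earlier; MyMetric and the data variables are placeholders, not the benchmark code:

# A hypothetical sketch; MyMetric, data, and human_scores are placeholders
# to be replaced with your own metric and annotated examples.
from eva.heva import Heva

metric = MyMetric()                     # your metric, implementing compute()
metric_scores = metric.compute(data)    # data in the format shown earlier
heva = Heva([1, 2, 3, 4, 5])
print(heva.correlation(metric_scores, human_scores))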

How to Cite

@misc{guan2021openmeva,
      title={OpenMEVA: A Benchmark for Evaluating Open-ended Story Generation Metrics}, 
      author={Jian Guan and Zhexin Zhang and Zhuoer Feng and Zitao Liu and Wenbiao Ding and Xiaoxi Mao and Changjie Fan and Minlie Huang},
      year={2021},
      eprint={2105.08920},
      archivePrefix={arXiv},
      primaryClass={cs.CL}
}

It's our honor to help you better explore language generation evaluation with our toolkit and benchmark.

Owner
Conversational AI group from Tsinghua University