FactSumm: Factual Consistency Scorer for Abstractive Summarization

Overview


FactSumm is a toolkit that scores Factual Consistency for Abstractive Summarization

Without fine-tuning, you can simply apply a variety of downstream tasks to both the source article and the generated abstractive summary

For example, by extracting fact triples from the source article and the generated summary, we can verify whether the summary correctly reflects source-based facts

As you can guess, this PoC-ish project uses a lot of pre-trained modules that require super-duper computing resources

So don't blame me, just take it as a concept project 👀


Installation

FactSumm requires Java to be installed in your environment to use Stanford OpenIE. With Java and Python 3, you can install factsumm simply using pip:

pip install factsumm

Or you can install FactSumm from the source repository:

git clone https://github.com/huffon/factsumm
cd factsumm
pip install .

Usage

>>> from factsumm import FactSumm
>>> factsumm = FactSumm()
>>> article = "Lionel Andrés Messi (born 24 June 1987) is an Argentine professional footballer who plays as a forward and captains both Spanish club Barcelona and the Argentina national team. Often considered as the best player in the world and widely regarded as one of the greatest players of all time, Messi has won a record six Ballon d'Or awards, a record six European Golden Shoes, and in 2020 was named to the Ballon d'Or Dream Team."
>>> summary = "Lionel Andrés Messi (born 24 Aug 1997) is an Spanish professional footballer who plays as a forward and captains both Spanish club Barcelona and the Spanish national team."
>>> factsumm(article, summary, verbose=True)
SOURCE Entities
1: [('Lionel Andrés Messi', 'PERSON'), ('24 June 1987', 'DATE'), ('Argentine', 'NORP'), ('Spanish', 'NORP'), ('Barcelona',
'GPE'), ('Argentina', 'GPE')]
2: [('one', 'CARDINAL'), ('Messi', 'PERSON'), ('six', 'CARDINAL'), ('European Golden Shoes', 'WORK_OF_ART'), ('2020', 'DATE'),
("the Ballon d'Or Dream Team", 'ORG')]

SUMMARY Entities
1: [('Lionel Andrés Messi', 'PERSON'), ('24 Aug 1997', 'DATE'), ('Spanish', 'NORP'), ('Barcelona', 'ORG')]

SOURCE Facts
('Lionel Andrés Messi', 'per:origin', 'Argentine')
('Spanish', 'per:date_of_birth', '24 June 1987')
('Spanish', 'org:top_members/employees', 'Lionel Andrés Messi')
('Spanish', 'org:members', 'Barcelona')
('Lionel Andrés Messi', 'per:employee_of', 'Barcelona')
('Lionel Andrés Messi', 'per:date_of_birth', '24 June 1987')
('Barcelona', 'org:top_members/employees', 'Lionel Andrés Messi')

SUMMARY Facts
('Lionel Andrés Messi', 'per:origin', 'Spanish')
('Lionel Andrés Messi', 'per:date_of_birth', '24 Aug 1997')
('Spanish', 'per:date_of_birth', '24 Aug 1997')
('Spanish', 'org:top_members/employees', 'Lionel Andrés Messi')
('Spanish', 'org:members', 'Barcelona')
('Lionel Andrés Messi', 'per:employee_of', 'Barcelona')
('Barcelona', 'org:top_members/employees', 'Lionel Andrés Messi')

COMMON Facts
('Spanish', 'org:top_members/employees', 'Lionel Andrés Messi')
('Spanish', 'org:members', 'Barcelona')
('Lionel Andrés Messi', 'per:employee_of', 'Barcelona')
('Barcelona', 'org:top_members/employees', 'Lionel Andrés Messi')

DIFF Facts
('Lionel Andrés Messi', 'per:origin', 'Spanish')
('Lionel Andrés Messi', 'per:date_of_birth', '24 Aug 1997')
('Spanish', 'per:date_of_birth', '24 Aug 1997')

Fact Score: 0.5714285714285714

Answers based on SOURCE (Questions are generated from Summary)
[Q] Who is the captain of the Spanish national team?    [Pred] <unanswerable>
[Q] When was Lionel Andrés Messi born?  [Pred] 24 June 1987
[Q] Lionel Andrés Messi is a professional footballer of what nationality?       [Pred] Argentine
[Q] Lionel Messi is a captain of which Spanish club?    [Pred] Barcelona

Answers based on SUMMARY (Questions are generated from Summary)
[Q] Who is the captain of the Spanish national team?    [Pred] Lionel Andrés Messi
[Q] When was Lionel Andrés Messi born?  [Pred] 24 Aug 1997
[Q] Lionel Andrés Messi is a professional footballer of what nationality?       [Pred] Spanish
[Q] Lionel Messi is a captain of which Spanish club?    [Pred] Barcelona

QAGS Score: 0.3333333333333333

SOURCE Triples
('Messi', 'is', 'Argentine')
('Messi', 'is', 'professional')

SUMMARY Triples
('Messi', 'is', 'Spanish')
('Messi', 'is', 'professional')

Triple Score: 0.5

Avg. ROUGE-1: 0.4415584415584415
Avg. ROUGE-2: 0.3287671232876712
Avg. ROUGE-L: 0.4415584415584415

Sub-modules

From here, you can find various ways to score the Factual Consistency level with unsupervised methods


Triple-based Module ( closed-scheme )

>>> from factsumm import FactSumm
>>> factsumm = FactSumm()
>>> factsumm.extract_facts(article, summary, verbose=True)
SOURCE Entities
1: [('Lionel Andrés Messi', 'PERSON'), ('24 June 1987', 'DATE'), ('Argentine', 'NORP'), ('Spanish', 'NORP'), ('Barcelona',
'GPE'), ('Argentina', 'GPE')]
2: [('one', 'CARDINAL'), ('Messi', 'PERSON'), ('six', 'CARDINAL'), ('European Golden Shoes', 'WORK_OF_ART'), ('2020', 'DATE'),
("the Ballon d'Or Dream Team", 'ORG')]

SUMMARY Entities
1: [('Lionel Andrés Messi', 'PERSON'), ('24 Aug 1997', 'DATE'), ('Spanish', 'NORP'), ('Barcelona', 'ORG')]

SOURCE Facts
('Lionel Andrés Messi', 'per:origin', 'Argentine')
('Spanish', 'per:date_of_birth', '24 June 1987')
('Spanish', 'org:top_members/employees', 'Lionel Andrés Messi')
('Spanish', 'org:members', 'Barcelona')
('Lionel Andrés Messi', 'per:employee_of', 'Barcelona')
('Lionel Andrés Messi', 'per:date_of_birth', '24 June 1987')
('Barcelona', 'org:top_members/employees', 'Lionel Andrés Messi')

SUMMARY Facts
('Lionel Andrés Messi', 'per:origin', 'Spanish')
('Lionel Andrés Messi', 'per:date_of_birth', '24 Aug 1997')
('Spanish', 'per:date_of_birth', '24 Aug 1997')
('Spanish', 'org:top_members/employees', 'Lionel Andrés Messi')
('Spanish', 'org:members', 'Barcelona')
('Lionel Andrés Messi', 'per:employee_of', 'Barcelona')
('Barcelona', 'org:top_members/employees', 'Lionel Andrés Messi')

COMMON Facts
('Spanish', 'org:top_members/employees', 'Lionel Andrés Messi')
('Spanish', 'org:members', 'Barcelona')
('Lionel Andrés Messi', 'per:employee_of', 'Barcelona')
('Barcelona', 'org:top_members/employees', 'Lionel Andrés Messi')

DIFF Facts
('Lionel Andrés Messi', 'per:origin', 'Spanish')
('Lionel Andrés Messi', 'per:date_of_birth', '24 Aug 1997')
('Spanish', 'per:date_of_birth', '24 Aug 1997')

Fact Score: 0.5714285714285714

The triple-based module counts the overlap of fact triples between the generated summary and the source document.
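
The Fact Score above corresponds to the fraction of SUMMARY facts that also appear among the SOURCE facts: 4 of the 7 summary triples are shared (the COMMON Facts), giving 4/7 ≈ 0.571. Below is a minimal sketch of that counting, using the triples printed above; it is not FactSumm's internal code, only the overlap idea.

# Sketch only: overlap counting on the triples shown above, not FactSumm's internal code
source_facts = {
    ("Lionel Andrés Messi", "per:origin", "Argentine"),
    ("Spanish", "per:date_of_birth", "24 June 1987"),
    ("Spanish", "org:top_members/employees", "Lionel Andrés Messi"),
    ("Spanish", "org:members", "Barcelona"),
    ("Lionel Andrés Messi", "per:employee_of", "Barcelona"),
    ("Lionel Andrés Messi", "per:date_of_birth", "24 June 1987"),
    ("Barcelona", "org:top_members/employees", "Lionel Andrés Messi"),
}
summary_facts = {
    ("Lionel Andrés Messi", "per:origin", "Spanish"),
    ("Lionel Andrés Messi", "per:date_of_birth", "24 Aug 1997"),
    ("Spanish", "per:date_of_birth", "24 Aug 1997"),
    ("Spanish", "org:top_members/employees", "Lionel Andrés Messi"),
    ("Spanish", "org:members", "Barcelona"),
    ("Lionel Andrés Messi", "per:employee_of", "Barcelona"),
    ("Barcelona", "org:top_members/employees", "Lionel Andrés Messi"),
}

common = source_facts & summary_facts            # summary facts supported by the source
fact_score = len(common) / len(summary_facts)    # 4 / 7 = 0.5714...
print(fact_score)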


QA-based Module

If you ask the same questions of both the summary and the source document, you should get similar answers when the summary faithfully reflects the source document

>>> from factsumm import FactSumm
>>> factsumm = FactSumm()
>>> factsumm.extract_qas(article, summary, verbose=True)
Answers based on SOURCE (Questions are generated from Summary)
[Q] Who is the captain of the Spanish national team?    [Pred] <unanswerable>
[Q] When was Lionel Andrés Messi born?  [Pred] 24 June 1987
[Q] Lionel Andrés Messi is a professional footballer of what nationality?       [Pred] Argentine
[Q] Lionel Messi is a captain of which Spanish club?    [Pred] Barcelona

Answers based on SUMMARY (Questions are generated from Summary)
[Q] Who is the captain of the Spanish national team?    [Pred] Lionel Andrés Messi
[Q] When was Lionel Andrés Messi born?  [Pred] 24 Aug 1997
[Q] Lionel Andrés Messi is a professional footballer of what nationality?       [Pred] Spanish
[Q] Lionel Messi is a captain of which Spanish club?    [Pred] Barcelona

QAGS Score: 0.3333333333333333
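
The QAGS Score above averages an answer-similarity measure over the summary-generated questions. One common choice, a token-overlap F1 in the style of SQuAD/QAGS evaluation, reproduces the number printed above: the four answer pairs score 0.0, 1/3, 0.0, and 1.0, averaging to 0.333…. Treat the sketch below as an illustration of the idea, not necessarily FactSumm's exact implementation.

from collections import Counter

def token_f1(pred: str, gold: str) -> float:
    # Token-overlap F1 between two answer strings (SQuAD-style)
    pred_tokens, gold_tokens = pred.lower().split(), gold.lower().split()
    overlap = sum((Counter(pred_tokens) & Counter(gold_tokens)).values())
    if overlap == 0:
        return 0.0
    precision, recall = overlap / len(pred_tokens), overlap / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)

# Answers to the same summary-generated questions, as printed above
source_answers = ["<unanswerable>", "24 June 1987", "Argentine", "Barcelona"]
summary_answers = ["Lionel Andrés Messi", "24 Aug 1997", "Spanish", "Barcelona"]

qags_score = sum(
    token_f1(s, t) for s, t in zip(source_answers, summary_answers)
) / len(source_answers)
print(qags_score)    # 0.3333...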

OpenIE-based Module ( open-scheme )

>>> from factsumm import FactSumm
>>> factsumm = FactSumm()
>>> factsumm.extract_triples(article, summary, verbose=True)
SOURCE Triples
('Messi', 'is', 'Argentine')
('Messi', 'is', 'professional')

SUMMARY Triples
('Messi', 'is', 'Spanish')
('Messi', 'is', 'professional')

Triple Score: 0.5

Stanford OpenIE can extract relation triples from raw strings. But it's important to note that it is based on an open scheme, unlike the Triple-based Module above, which uses a closed relation scheme.

For example, from "Obama was born in Hawaii", OpenIE extracts ('Obama', 'was born in', 'Hawaii'). However, from "Hawaii is the birthplace of Obama", it extracts ('Hawaii', 'is the birthplace of', 'Obama'). Common sense says the two sentences express the same fact, but OpenIE cannot recognize that the two triples are equivalent, since it is based on an open scheme.

So the score for this module may be unstable.
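
You can see the mismatch described above by running the extractor directly. The sketch below assumes a third-party Stanford OpenIE Python wrapper (`from openie import StanfordOpenIE`); like FactSumm's OpenIE module, it requires Java, and the exact triples it returns may vary.

# Illustration only: open-scheme triples from two paraphrases do not match exactly
from openie import StanfordOpenIE

with StanfordOpenIE() as client:
    a = {(t["subject"], t["relation"], t["object"])
         for t in client.annotate("Obama was born in Hawaii.")}
    b = {(t["subject"], t["relation"], t["object"])
         for t in client.annotate("Hawaii is the birthplace of Obama.")}

print(a)      # e.g. {('Obama', 'was born in', 'Hawaii')}
print(b)      # e.g. {('Hawaii', 'is birthplace of', 'Obama')}
print(a & b)  # empty: the same fact, but no exact triple overlap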


ROUGE-based Module

>>> from factsumm import FactSumm
>>> factsumm = FactSumm()
>>> factsumm.calculate_rouge(article, summary)
Avg. ROUGE-1: 0.4415584415584415
Avg. ROUGE-2: 0.3287671232876712
Avg. ROUGE-L: 0.4415584415584415

A simple but effective word-level overlap metric: the average ROUGE score between the source document and the generated summary.
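
If you only need raw ROUGE numbers, you can also compute them with the rouge pip package. This is an assumption for illustration; FactSumm computes its averages in its own way, so the values need not match the output above exactly.

from rouge import Rouge

# Shortened stand-ins for the `article` and `summary` strings in the Usage section
article = "Lionel Messi is an Argentine professional footballer who captains Barcelona."
summary = "Lionel Messi is a Spanish professional footballer who captains Barcelona."

rouge = Rouge()
scores = rouge.get_scores(summary, article)[0]   # summary as hypothesis, article as reference

print("ROUGE-1 F1:", scores["rouge-1"]["f"])
print("ROUGE-2 F1:", scores["rouge-2"]["f"])
print("ROUGE-L F1:", scores["rouge-l"]["f"])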


Citation

If you apply this library to any project, please cite:

@misc{factsumm,
  author       = {Heo, Hoon},
  title        = {FactSumm: Factual Consistency Scorer for Abstractive Summarization},
  howpublished = {\url{https://github.com/Huffon/factsumm}},
  year         = {2021},
}


Comments
  • BUG: AttributeError: 'str' object has no attribute 'generate'


    When I use the example in the README to get the QAGS score, there is a problem:

    AttributeError                            Traceback (most recent call last)
    ----> 1 factsumm.extract_qas(article, summary, verbose=True)

    ~/Desktop/factsumm-master/factsumm/factsumm.py in extract_qas(self, source, summary, source_ents, summary_ents, verbose, device)
        292     summary_ents = self.ner(summary_lines)
        293
    --> 294     summary_qas = self.qg(summary_lines, summary_ents)
        295
        296     source_answers = self.qa(source, summary_qas)

    ~/Desktop/factsumm-master/factsumm/utils/module_question.py in generate_question(sentences, total_entities)
         55     ).to(device)
         56
    ---> 57     outputs = model.generate(**tokens, max_length=64)
         58
         59     question = tokenizer.decode(outputs[0])

    AttributeError: 'str' object has no attribute 'generate'

    hope you can help me to solve this problem. Thanks!!

    opened by victory-h 0
  • IndexError: index out of range in self


    In the example, when I extend the length of the article and the summary, I get this error.

    /opt/anaconda3/envs/LDA0115/lib/python3.6/site-packages/torch/nn/modules/sparse.py in forward(self, input)
        124         return F.embedding(
        125             input, self.weight, self.padding_idx, self.max_norm,
    --> 126             self.norm_type, self.scale_grad_by_freq, self.sparse)
        127
        128     def extra_repr(self) -> str:

    /opt/anaconda3/envs/LDA0115/lib/python3.6/site-packages/torch/nn/functional.py in embedding(input, weight, padding_idx, max_norm, norm_type, scale_grad_by_freq, sparse)
       1850         # remove once script supports set_grad_enabled
       1851         _no_grad_embedding_renorm_(weight, input, max_norm, norm_type)
    -> 1852     return torch.embedding(weight, input, padding_idx, scale_grad_by_freq, sparse)
       1853
       1854

    IndexError: index out of range in self

    opened by victory-h 0
  • Hit Error while using this toolkits


    Loading Named Entity Recognition Pipeline...
    Loading Relation Extraction Pipeline...
    Fact Score: 0.5714285714285714
    Loading Question Generation Pipeline...
    Loading Question Answering Pipeline...
    Traceback (most recent call last):
      File "testcase.py", line 5, in <module>
        print(factsumm(article, summary, verbose=False))
      File "/usr/local/lib/python3.8/dist-packages/factsumm/__init__.py", line 366, in __call__
        qags_score = self.extract_qas(
      File "/usr/local/lib/python3.8/dist-packages/factsumm/__init__.py", line 263, in extract_qas
        source_answers = self.qa(source, summary_qas)
      File "/usr/local/lib/python3.8/dist-packages/factsumm/utils/level_sentence.py", line 100, in answer_question
        pred = qa(
      File "/usr/local/lib/python3.8/dist-packages/transformers/pipelines/question_answering.py", line 248, in __call__
        return super().__call__(examples[0], **kwargs)
      File "/usr/local/lib/python3.8/dist-packages/transformers/pipelines/base.py", line 915, in __call__
        return self.run_single(inputs, preprocess_params, forward_params, postprocess_params)
      File "/usr/local/lib/python3.8/dist-packages/transformers/pipelines/base.py", line 923, in run_single
        outputs = self.postprocess(model_outputs, **postprocess_params)
      File "/usr/local/lib/python3.8/dist-packages/transformers/pipelines/question_answering.py", line 409, in postprocess
        min_null_score = min(min_null_score, (start_[0] * end_[0]).item())
    ValueError: can only convert an array of size 1 to a Python scalar

    While using the provided example from the README, I hit the error above (I installed the package with pip, created a Python file, copied the example code, and ran it). pip uninstall and reinstall doesn't help. Any suggestions are greatly appreciated!

    opened by Ricardokevins 0
Releases(0.1.2)
  • 0.1.2(May 13, 2021)

    Update BERTScore based Module (See Sec 4.1 from https://arxiv.org/pdf/2005.03754.pdf)

    >>> factsumm = FactSumm()
    >>> factsumm.calculate_bert_score(article, summary)
    BERTScore Score
    Precision: 0.9151781797409058
    Recall: 0.9141832590103149
    F1: 0.9150083661079407
    
  • 0.1.1(May 12, 2021)

    Currently FactSumm supports the following methods:

    • NER + RE based Triple Module
    • QG + QA based Module
    • OpenIE based Triple Module
    • ROUGE based Module