Tracking Progress in Natural Language Processing


Table of contents

English

Vietnamese

Hindi

Chinese

For more tasks, datasets and results in Chinese, check out the Chinese NLP website.

French

Russian

Spanish

Portuguese

Korean

Nepali

Bengali

Persian

Turkish

German

This document aims to track the progress in Natural Language Processing (NLP) and give an overview of the state-of-the-art (SOTA) across the most common NLP tasks and their corresponding datasets.

It aims to cover both traditional and core NLP tasks such as dependency parsing and part-of-speech tagging as well as more recent ones such as reading comprehension and natural language inference. The main objective is to provide the reader with a quick overview of benchmark datasets and the state-of-the-art for their task of interest, which serves as a stepping stone for further research. To this end, if there is a place where results for a task are already published and regularly maintained, such as a public leaderboard, the reader will be pointed there.

If you want to find this document again in the future, just go to nlpprogress.com or nlpsota.com in your browser.

Contributing

Guidelines

Results   Results reported in published papers are preferred; an exception may be made for influential preprints.

Datasets   Datasets should have been used for evaluation in at least one published paper besides the one that introduced the dataset.

Code   We recommend adding a link to an implementation if one is available. You can add a Code column (see below) to the table if it does not exist yet. In the Code column, indicate an official implementation with Official. If an unofficial implementation is available, use Link (see below). If no implementation is available, you can leave the cell empty.

Adding a new result

If you would like to add a new result, you can just click on the small edit button in the top-right corner of the file for the respective task (see below).

Click on the edit button to add a file

This allows you to edit the file in Markdown. Simply add a row to the corresponding table in the same format. Make sure that the table stays sorted, with the best result on top. After you've made your change, check that the table still renders correctly by clicking on the "Preview changes" tab at the top of the page. If everything looks good, go to the bottom of the page, where you will see the form below.

Fill out the file change information

Add a name for your proposed change, an optional description, indicate that you would like to "Create a new branch for this commit and start a pull request", and click on "Propose file change".

Adding a new dataset or task

For adding a new dataset or task, you can also follow the steps above. Alternatively, you can fork the repository. In both cases, follow the steps below:

  1. If your task is completely new, create a new file and link to it in the table of contents above.
  2. If not, add your task or dataset to the respective section of the corresponding file (in alphabetical order).
  3. Briefly describe the dataset/task and include relevant references.
  4. Describe the evaluation setting and evaluation metric.
  5. Show what an annotated example of the dataset/task looks like.
  6. Add a download link if available.
  7. Copy the below table and fill in at least two results (including the state-of-the-art) for your dataset/task (change Score to the metric of your dataset). If your dataset/task has multiple metrics, add them to the right of Score.
  8. Submit your change as a pull request.
| Model | Score | Paper / Source | Code |
| ----- | :---: | -------------- | ---- |

Wish list

These are tasks and datasets that are still missing:

  • Bilingual dictionary induction
  • Discourse parsing
  • Keyphrase extraction
  • Knowledge base population (KBP)
  • More dialogue tasks
  • Semi-supervised learning
  • Frame-semantic parsing (FrameNet full-sentence analysis)

Exporting into a structured format

You can extract all the data into a structured, machine-readable JSON format with parsed tasks, descriptions and SOTA tables.

The instructions are in structured/README.md.
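
As a quick illustration, here is a minimal Python sketch for consuming such an export; the file name and field names below are assumptions for illustration, not the actual schema (see structured/README.md for that):

```python
import json

# Load the exported data (the file name "structured.json" is an
# assumption for illustration).
with open("structured.json", encoding="utf-8") as f:
    tasks = json.load(f)

# Print each task with the number of reported results, assuming each
# entry carries "task" and "sota" fields with result rows.
for task in tasks:
    rows = task.get("sota", {}).get("rows", [])
    print(f"{task.get('task', '?')}: {len(rows)} results")
```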

Instructions for building the site locally

Instructions for building the website locally using Jekyll can be found here.

Comments
  • CoNLL-2003 incomparable results

    Because of the small size of the CoNLL-2003 training set, some authors incorporated the development set into the training data after tuning their hyper-parameters. Consequently, not all results are directly comparable.

    Train+dev:

    • Flair embeddings (Akbik et al., 2018)
    • Peters et al. (2017)
    • Yang et al. (2017)

    Maybe those results should be marked with an asterisk.

    opened by ghaddarAbs 28
  • NLP Progress Graph

    Hi Sebastian, loved your idea for this repo. I was thinking we could have a graph showing the progress of different tasks in NLP based on the updates to their markdown files.

    I have created a shell script which clones your repo locally, counts the number of commits for each file, and then uses Python/pandas to preprocess the result, create a bar chart from it, and upload it to a free image-hosting service.

    Currently it counts all commits to a given file, but if we had a guideline for adding new results, fixing errors, etc. (maybe with different identifiers), then we could count the number of times a new result has been added to an NLP task. This would help in visualizing the most active and fastest-improving areas of NLP research.

    Currently the graph doesn't make much sense, but over time it will improve as we add more results.

    Also, if you think something like this could benefit the community, I can create a cron job on my PC (I don't have a server) which will update the image URL with the latest graph, which you could then show on the main page.
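
    A minimal Python sketch of the kind of script described above; the clone path, file layout, and plotting choices are assumptions:

    ```python
    import subprocess
    from collections import Counter

    import pandas as pd
    import matplotlib.pyplot as plt

    # Count commits touching each English task file in a local clone
    # of the repository (the "NLP-progress" path is an assumption).
    log = subprocess.run(
        ["git", "-C", "NLP-progress", "log", "--name-only", "--pretty=format:"],
        capture_output=True, text=True, check=True,
    ).stdout

    counts = Counter(
        line for line in log.splitlines()
        if line.startswith("english/") and line.endswith(".md")
    )

    # Plot the ten most-edited task files as a bar chart.
    pd.Series(counts).sort_values(ascending=False).head(10).plot(kind="bar")
    plt.ylabel("number of commits")
    plt.tight_layout()
    plt.savefig("nlp_progress_graph.png")
    ```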

    opened by nirmalsinghania2008 16
  • YAML - pros and cons

    I'd like to discuss here the pros and cons of using YAML going forward, or whether we should stick with Markdown tables. Here are some pros and cons, mainly from @NirantK (in https://github.com/sebastianruder/NLP-progress/pull/116), @stared (in https://github.com/sebastianruder/NLP-progress/issues/43 and https://github.com/sebastianruder/NLP-progress/pull/64), and myself.

    Pros:

    • Easier trend spotting in performance improvements
    • Easy to create plots and visualizations going forward
    • Data is separated from presentation

    Cons:

    • Hard for contributors, e.g. HTML omissions can't be spotted without setting up Jekyll locally
    • The GitHub repo becomes useless for readers, who then rely exclusively on nlpprogress.com
    • Many visualizations (e.g. bar charts) based on performance numbers are not more useful than the raw tables

    Other opinions are welcome.

    opened by sebastianruder 10
  • What about other languages?

    Thanks for this work!

    These pages seem to cover progress only for English (well, except for MT). Do you have plans to include other languages?

    One extreme example is POS tagging and dependency parsing: UD has 60+ languages :) For others, there should be very limited data.

    opened by Hrant-Khachatrian 10
  • Incorrect BLEU score for English-Hindi MT System

    The BLEU score written in the document is 89.35, which looks wrong to me. The referenced paper reports a BLEU score of 12.83, which itself is not state-of-the-art for this language pair.

    opened by kartikeypant 7
  • add G2P conversion task of schwa deletion to Hindi

    There's been a good body of previous work on schwa deletion in NLP/CL; you can see some of it in our paper. It would be good to keep track of the SOTA on it, since it's an important task for G2P conversion in North Indian languages.

    opened by aryamanarora 6
  • Added new task: data-to-text generation

    I have added a new task: Data-to-Text Natural Language Generation (D2T NLG). D2T NLG differs from other NLG tasks such as MT or QA in that the input to the text generation system is a structured representation (a table, knowledge graph, or JSON) instead of unstructured text. The document provides an overview of the three most recent and popular publicly available datasets for D2T NLG. With the advancements in deep learning, several novel neural methods have been proposed that are capable of generating accurate, fluent and diverse texts.

    opened by ashishu007 6
  • Explain relation to paperswithcode.com

    Since the inception of this great repository of state-of-the-art results, alternatives such as paperswithcode.com have gained traction. This raises the question of the usefulness of keeping both resources up to date with the latest results. Could users and maintainers of this repository perhaps elaborate a bit, here and/or in the README, on how they see this resource relating to paperswithcode.com, and in particular what nlpprogress.com does well that the former does not?

    opened by cwenner 6
  • add TCAN results to LM

    To be honest, I'm a bit skeptical about their results and have asked them some questions via email. So let's put this pull request on hold for now (unless the maintainers think it's fine), and I will update it once they answer my questions.

    opened by Separius 6
  • Add missing LM SOTA result + # params + prev SOTA

    Add the missing LM ensemble, which is SOTA for PTB. Add the second-in-line LM SOTA under a strict interpretation. Add the number of parameters for LM results.

    (unsure why it lists commits that have already been merged)

    opened by cwenner 6
  • Data in YAML for structure and plots

    Related to #43.

    Right now I've done a demo for CCG. I didn't work on the plot form; I just wanted to show that it is possible and easy. Also, I think the data format can be standardized, so it would be simpler to add more complicated things (e.g. further comments, links to multiple implementations, etc.).

    See files in:

    • _data - data in YAML format
    • _includes - for ways of converting data into its presentations (tables, charts, etc)
    • ccg_supertagging.md to see how to include these

    IMHO YAML is cleaner to write and read than markdown tables, so that is an advantage on its own. In my experience, contributors (at least ones who use GitHub) have not the slightest problem using YAML (see https://p.migdal.pl/interactive-machine-learning-list/).

    Right now I render the data through a Liquid template.
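
    For illustration, here is a minimal sketch of this data/presentation split in Python rather than Liquid; the file name and field names are assumptions, not the actual schema:

    ```python
    import yaml  # PyYAML

    # Read results from a YAML data file and render them as a sorted
    # markdown table (file and field names are hypothetical).
    with open("_data/ccg_supertagging.yaml", encoding="utf-8") as f:
        results = yaml.safe_load(f)

    print("| Model | Accuracy | Paper / Source |")
    print("| ----- | :------: | -------------- |")
    for row in sorted(results, key=lambda r: r["accuracy"], reverse=True):
        print(f"| {row['model']} | {row['accuracy']} | {row['paper']} |")
    ```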

    opened by stared 6
  • Pull request with new emotion detection dataset

    There seem to be some conflicts, and I am not resolving them myself as doing so might remove some code. Could you kindly resolve them and merge my request?

    opened by KhondokerIslam 0
  • Update paraphrase-generation.md

    MULTIPIT, MULTIPITCROWD and MULTIPITEXPERT

    Past efforts on creating paraphrase corpora only consider one paraphrase criterion, without taking into account the fact that the desired “strictness” of semantic equivalence in paraphrases varies from task to task (Bhagat and Hovy, 2013; Liu and Soh, 2022). For example, for the purpose of tracking unfolding events, “A tsunami hit Haiti.” and “303 people died because of the tsunami in Haiti” are sufficiently close to be considered paraphrases; whereas for paraphrase generation, the extra information “303 people dead” in the latter sentence may lead models to learn to hallucinate and generate more unfaithful content. In this paper, the authors present an effective data collection and annotation method to address these issues.

    MULTIPIT is a Multi-Topic Paraphrase in Twitter corpus that consists of a total of 130k sentence pairs with crowdsourced (MULTIPITCROWD) and expert (MULTIPITEXPERT) annotations. MULTIPITCROWD is a large crowdsourced set of 125K sentence pairs that is useful for tracking information on Twitter.

    | Model | F1 | Paper / Source | Code |
    | ----- | :---: | -------------- | ---- |
    | DeBERTaV3large | 92.00 | Improving Large-scale Paraphrase Acquisition and Generation | Unavailable |

    MULTIPITEXPERT is an expert-annotated set of 5.5K sentence pairs using a stricter definition that is more suitable for acquiring paraphrases for generation purposes.

    | Model | F1 | Paper / Source | Code |
    | ----- | :---: | -------------- | ---- |
    | DeBERTaV3large | 83.20 | Improving Large-scale Paraphrase Acquisition and Generation | Unavailable |

    opened by adrienpayong 0
  • Add this to machine translation. Is it okay?

    opened by adrienpayong 0