BERT score for text generation

BERTScore


Automatic Evaluation Metric described in the paper BERTScore: Evaluating Text Generation with BERT (ICLR 2020).

News:

  • Features to appear in the next version (currently in the master branch):

    • Support 3 ByT5 models
  • Updated to version 0.3.10

    • Support 8 SimCSE models
    • Fix the support of scibert (to be compatible with transformers >= 4.0.0)
    • Add scripts for reproducing some results in our paper (See this folder)
    • Support fast tokenizers in huggingface transformers with --use_fast_tokenizer. Notably, you will get different scores because of the difference in the tokenizer implementations (#106).
    • Fix non-zero recall problem for empty candidate strings (#107).
    • Add Turkish BERT support (#108).
  • Updated to version 0.3.9

    • Support 3 BigBird models
    • Fix bugs for mBART and T5
    • Support 4 mT5 models as requested (#93)
  • Updated to version 0.3.8

    • Support 53 new pretrained models including BART, mBART, BORT, DeBERTa, T5, BERTweet, MPNet, ConvBERT, SqueezeBERT, SpanBERT, PEGASUS, Longformer, LED, Blenderbot, etc. Among them, DeBERTa achieves higher correlation with human scores than RoBERTa (our default) on the WMT16 dataset. The correlations are presented in this Google sheet.
    • Please consider using --model_type microsoft/deberta-xlarge-mnli or --model_type microsoft/deberta-large-mnli (faster) if you want the scores to correlate better with human scores.
    • Add baseline files for DeBERTa models.
    • Add example code to generate baseline files (please see the details).
  • Updated to version 0.3.7

    • Compatible with Huggingface's transformers version >=4.0.0. Thanks to public contributors (#84, #85, #86).
  • See #22 if you want to replicate our experiments on the COCO Captioning dataset.

  • For people in China, downloading pre-trained weights can be very slow. We provide copies of a few models on Baidu Pan.

  • Huggingface's datasets library includes BERTScore in their metric collection.

Previous updates

  • Updated to version 0.3.6
    • Support custom baseline files #74
    • The option --rescale-with-baseline is changed to --rescale_with_baseline so that it is consistent with other options.
  • Updated to version 0.3.5
    • Being compatible with Huggingface's transformers >=v3.0.0 and minor fixes (#58, #66, #68)
    • Several improvements related to efficiency (#67, #69)
  • Updated to version 0.3.4
    • Compatible with transformers v2.11.0 now (#58)
  • Updated to version 0.3.3
    • Fixing the bug with empty strings (issue #47).
    • Supporting 6 ELECTRA models and 24 smaller BERT models.
    • A new Google sheet for tracking the performance (i.e., Pearson correlation with human judgment) of different models on WMT16 to-English.
    • Including the script for tuning the best number of layers of an English pre-trained model on WMT16 to-English data (See the details).
  • Updated to version 0.3.2
    • Bug fixed: fixing the bug in v0.3.1 when having multiple reference sentences.
    • Supporting multiple reference sentences with our command line tool.
  • Updated to version 0.3.1
    • A new BERTScorer object that caches the model to avoid re-loading it multiple times. Please see our jupyter notebook example for the usage.
    • Supporting multiple reference sentences for each example. The score function now can take a list of lists of strings as the references and return the score between the candidate sentence and its closest reference sentence.

Please see release logs for older updates.

Authors:

Tianyi Zhang*, Varsha Kishore*, Felix Wu*, Kilian Q. Weinberger, and Yoav Artzi

*: Equal Contribution

Overview

BERTScore leverages the pre-trained contextual embeddings from BERT and matches words in candidate and reference sentences by cosine similarity. It has been shown to correlate with human judgment on sentence-level and system-level evaluation. Moreover, BERTScore computes precision, recall, and F1 measure, which can be useful for evaluating different language generation tasks.

For an illustration, BERTScore recall is computed by greedily matching each reference token to the most similar candidate token and averaging the cosine similarities (with pre-normalized embeddings, the inner product). With reference tokens $x = \langle x_1, \dots, x_{|x|} \rangle$ and candidate tokens $\hat{x} = \langle \hat{x}_1, \dots, \hat{x}_{|\hat{x}|} \rangle$:

$$R_{\text{BERT}} = \frac{1}{|x|} \sum_{x_i \in x} \max_{\hat{x}_j \in \hat{x}} x_i^\top \hat{x}_j$$

If you find this repo useful, please cite:

@inproceedings{bert-score,
  title={BERTScore: Evaluating Text Generation with BERT},
  author={Tianyi Zhang* and Varsha Kishore* and Felix Wu* and Kilian Q. Weinberger and Yoav Artzi},
  booktitle={International Conference on Learning Representations},
  year={2020},
  url={https://openreview.net/forum?id=SkeHuCVFDr}
}

Installation

  • Python version >= 3.6
  • PyTorch version >= 1.0.0

Install from PyPI with pip:

pip install bert-score

Install the latest unstable version from the master branch on GitHub:

pip install git+https://github.com/Tiiiger/bert_score

Install from source:

git clone https://github.com/Tiiiger/bert_score
cd bert_score
pip install .

You can test your installation with:

python -m unittest discover

Usage

Python Function

At a high level, we provide a Python function bert_score.score and a Python object bert_score.BERTScorer. The function exposes all supported features, while the scorer object caches the BERT model to facilitate multiple evaluations. Check our demo to see how to use these two interfaces. Please refer to bert_score/score.py for implementation details.
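
A minimal sketch of both interfaces (the example sentences are illustrative, not from the repo):

import bert_score

cands = ["The cat sat on the mat."]
refs = ["A cat was sitting on the mat."]

# One-off scoring with the function interface
P, R, F1 = bert_score.score(cands, refs, lang="en")

# Reuse a scorer object across evaluations; the model is loaded only once
scorer = bert_score.BERTScorer(lang="en", rescale_with_baseline=True)
P, R, F1 = scorer.score(cands, refs)
print(f"P={P.mean():.4f} R={R.mean():.4f} F1={F1.mean():.4f}")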

Running BERTScore can be computationally intensive (because it uses BERT :p). Therefore, a GPU is usually necessary. If you don't have access to a GPU, you can try our demo on Google Colab.

Command Line Interface (CLI)

We provide a command-line interface (CLI) for BERTScore in addition to the Python module. Use the CLI as follows:

  1. To evaluate English text files:

We provide example inputs under ./example.

bert-score -r example/refs.txt -c example/hyps.txt --lang en

You will get the following output at the end:

roberta-large_L17_no-idf_version=0.3.0(hug_trans=2.3.0) P: 0.957378 R: 0.961325 F1: 0.959333

where "roberta-large_L17_no-idf_version=0.3.0(hug_trans=2.3.0)" is the hash code.

Starting from version 0.3.0, we support rescaling the scores with baseline scores:

bert-score -r example/refs.txt -c example/hyps.txt --lang en --rescale_with_baseline

You will get:

roberta-large_L17_no-idf_version=0.3.0(hug_trans=2.3.0)-rescaled P: 0.747044 R: 0.770484 F1: 0.759045

This makes the range of the scores larger and more human-readable. Please see this post for details.

When you have multiple reference sentences, use:

bert-score -r example/refs.txt example/refs2.txt -c example/hyps.txt --lang en

where the -r argument supports an arbitrary number of reference files. Each reference file should have the same number of lines as your candidate/hypothesis file. The i-th line in each reference file corresponds to the i-th line in the candidate file.
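
The Python interface accepts the same multi-reference setup as a list of reference lists; the score for each candidate is computed against its closest reference. A minimal sketch (the sentences are illustrative):

import bert_score

cands = ["The cat sat on the mat."]
# One list of references per candidate; the best-matching reference is used.
refs = [["A cat was sitting on the mat.", "There was a cat on the mat."]]

P, R, F1 = bert_score.score(cands, refs, lang="en")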

  2. To evaluate text files in other languages:

We currently support the 104 languages in multilingual BERT (full list).

Please specify the two-letter abbreviation of the language. For instance, use --lang zh for Chinese text.

See more options by bert-score -h.

  3. To load your own custom model: specify the path to the model and the number of layers to use with --model and --num_layers (a Python equivalent appears after this list).
bert-score -r example/refs.txt -c example/hyps.txt --model path_to_my_bert --num_layers 9
  4. To visualize matching scores:
bert-score-show --lang en -r "There are two bananas on the table." -c "On the table are two apples." -f out.png

The figure will be saved to out.png.
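
Loading a custom model also works from the Python interface, where model_type accepts a path and num_layers selects the layer. A minimal sketch (path_to_my_bert is a placeholder, as in the CLI example above):

import bert_score

cands = ["The cat sat on the mat."]
refs = ["A cat was sitting on the mat."]
P, R, F1 = bert_score.score(cands, refs, model_type="path_to_my_bert", num_layers=9)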

Practical Tips

  • Report the hash code (e.g., roberta-large_L17_no-idf_version=0.3.0(hug_trans=2.3.0)-rescaled) in your paper so that people know what settings you used. This is inspired by sacreBLEU. Changes in Huggingface's transformers version may also affect the score (see issue #46).
  • Unlike BERT, RoBERTa uses a GPT2-style tokenizer, which creates additional " " tokens when multiple spaces appear together. It is recommended to remove the extra spaces with sent = re.sub(r' +', ' ', sent) or sent = re.sub(r'\s+', ' ', sent) (see the sketch after this list).
  • Using inverse document frequency (idf) on the reference sentences to weigh word importance may correlate better with human judgment. However, when the set of reference sentences becomes too small, the idf weights become inaccurate/invalid. We therefore make idf weighting optional. To use idf, set --idf with the CLI tool or idf=True when calling the bert_score.score function.
  • If you are low on GPU memory, consider lowering batch_size when calling the bert_score.score function.
  • To use a particular model, set -m MODEL_TYPE with the CLI tool or model_type=MODEL_TYPE when calling the bert_score.score function.
  • We tune the layer to use based on the WMT16 metric evaluation dataset. You may use a different layer by setting -l LAYER or num_layers=LAYER. To find the best layer for your custom model, please follow the instructions in the tune_layers folder.
  • Limitation: Because BERT, RoBERTa, and XLM with learned positional embeddings are pre-trained on sentences with a max length of 512, BERTScore is undefined between sentences longer than 510 tokens (512 after adding the [CLS] and [SEP] tokens). Longer sentences will be truncated. Please consider using XLNet, which supports much longer inputs.
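
A minimal sketch combining several of these tips (whitespace normalization, idf weighting, and a smaller batch size; the sentences are illustrative):

import re
import bert_score

cands = ["The cat  sat on   the mat."]
refs = ["A cat was sitting on the mat."]

# Collapse runs of whitespace before scoring with RoBERTa's GPT2-style tokenizer
cands = [re.sub(r"\s+", " ", s).strip() for s in cands]
refs = [re.sub(r"\s+", " ", s).strip() for s in refs]

# idf weighting (computed over the references) and a smaller batch size for limited GPU memory
P, R, F1 = bert_score.score(cands, refs, lang="en", idf=True, batch_size=16)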

Default Behavior

Default Model

Language   Model
en         roberta-large
en-sci     allenai/scibert_scivocab_uncased
zh         bert-base-chinese
tr         dbmdz/bert-base-turkish-cased
others     bert-base-multilingual-cased

Default Layers

Please see this Google sheet for the supported models and their performance.

Acknowledgement

This repo wouldn't be possible without the awesome bert, fairseq, and transformers.

Comments
  • AssertionError: Different number of candidates and references

    Hi,

    I am trying to compute semantic similarity using BERTScore between a system summary and a reference summary that have different numbers of sentences, and I am getting this error: AssertionError: Different number of candidates and references.

    I understand that the developer has asserted the condition.

    Is there a way out?

    Thanks Alka

    opened by alkakhurana 13
  • rescale and specify certain model

    Hi, thank you for making your code available. I have used your score before the last update (before multi-refs were possible and before the scorer). I used to get the hash of the model to make sure I always get the same results. With the new update, I'm struggling to find how to set a specific model and also rescale.

    For example, I would like to do something like this: out, hash_code = score(preds, golds, model_type="roberta-large_L17_no-idf_version=0.3.0(hug_trans=2.5.0)", rescale_with_baseline=True, return_hash=True)

    roberta-large_L17_no-idf_version=0.3.0(hug_trans=2.5.0) is the hash I got from my earlier runs a couple of months ago.

    Appreciate your help Areej

    opened by areejokaili 11
  • Evaluation Corpora

    Thank you for your great work!

    Can you guys share the Machine Translation and Image Captioning corpora that contain the references, candidates, and human scores, along with the BLEU and BERTScore values?

    Thank you!

    opened by gmihaila 10
  • [Question] Cross-lingual Score

    Assuming that the embeddings have learned joint language representations (so that cat is close to katze or chat, and hence a sentence like I love eating will be close to Ich esse gerne, as happens in the MUSE or LASER models), would it be possible to evaluate BERTScore against sentences in two different languages?

    opened by loretoparisi 10
  • Running slowly

    Hi, thank you for your nice work and released code. I have tried to run the code, but it is very slow; maybe I should change some settings? Could you give me some advice? Thank you very much~

    opened by HqWu-HITCS 10
  • request for the data in Image Captioning experiment

    Hi, would you mind releasing the data used in the Image Captioning experiment (human judgments of twelve submission entries from the COCO 2015 Captioning Challenge) in your paper? Thanks a lot!

    opened by ChrisRBXiong 9
  • Version update changes the scores

    Hi there,

    Going from v0.3.4 (transformers=2.11) to v0.3.6 (transformers=3.3.1) shifts the scores abruptly for a given system:

    roberta-large_L17_no-idf_version=0.3.6(hug_trans=3.3.1)-rescaled P: 0.294713 R: 0.474827 F1: 0.376573
    roberta-large_L17_no-idf_version=0.3.4(hug_trans=2.11.0)-rescaled P: 0.459102 R: 0.514197 F1: 0.479780
    
    opened by ozancaglayan 8
  • The download speed is very slow in China

    After I execute bert_score to run the examples, it begins to download files. The first file downloads quickly, but the second file, of size 1.43G, which I suppose is the BERT model file, downloads very slowly, and I am based in China. Is there any way to work around this problem? For example, can I download the BERT models elsewhere first and then use bert_score to run them?

    opened by chncwang 8
  • Create my own Model for Sentence Similarity/Automated Scoring

    I have a Wikipedia dump file as my corpus (which is in Indonesian; I've extracted it and converted it to .txt). How can I fine-tune bert-base-multilingual-cased on my corpus and use it with BERTScore, so that I have my own model for a specific task such as sentence similarity or automated short-answer scoring?

    Or maybe I should do this with the original BERT? Thank you so much in advance.

    opened by dhimasyoga16 7
  • Use fine tuned model

    I fine-tuned the BERT model on my PC, then used the command bert-score -r comments.txt -c all_tags.txt --model my_model --num_layers 9 to run BERTScore with the fine-tuned model, but this error happened.

    opened by seyyedjavadrazavi 6
  • Slow tokenizer is used by default

    First of all, thanks for a useful metric along with the code!

    I've used it to evaluate a big set of internal documents and it takes a while. Upon reviewing your code, I noticed that the slow (Python) tokenizer is used by default (use_fast=False). It would be great either to switch the default to the fast tokenizers (written in Rust) or at least to parametrize it, so that one can avoid hacking the library code to make it work for a massive set of documents (currently it's a bottleneck, especially on GPU-powered machines).

    Thanks!

    opened by nikitajz 6
  • get_wmt18_seg_result question

    Hello, when I run the get_wmt18_seg_result.py file I get the results below, which do not match the absolute Pearson correlation reported in your paper. Could you explain the results produced by get_wmt18_seg_result.py? How can I obtain the absolute Pearson correlation results reported in the paper?

    model_type metric cs-en de-en et-en fi-en ru-en tr-en zh-en avg
    roberta-large P 0.3831702544031311 0.54582257007364 0.39514465541862803 0.2939672801635992 0.35140330642060746 0.29337243401759533 0.2487933567167311 0.35881055103056175
    roberta-large R 0.40626223091976515 0.5499350991505058 0.39761287706493187 0.3182515337423313 0.35851595540176856 0.29665689149560115 0.25802680097131037 0.3693230555351735
    roberta-large F 0.4140900195694716 0.5548187274292837 0.4033603074698965 0.30610940695296524 0.354479046520569 0.3020527859237537 0.26480199058668347 0.3713874692075177
    allenai/scibert_scivocab_uncased P 0.3232876712328767 0.506162367788616 0.3475784982634298 0.21945296523517382 0.3081507112648981 0.22205278592375366 0.2200737476391762 0.3066798210497034
    allenai/scibert_scivocab_uncased R 0.3557729941291585 0.5114572489750806 0.35551206784083494 0.25140593047034765 0.31103421760861205 0.23096774193548386 0.23560272206733218 0.32167898900383574
    allenai/scibert_scivocab_uncased F 0.34794520547945207 0.5180887021115267 0.3570282611378502 0.24821063394683027 0.31391772395232603 0.23026392961876832 0.23302455256767696 0.3212112869734901
    bert-base-chinese P 0.225440313111546 0.4306460526146689 0.30604185398705946 0.17356850715746422 0.23317954632833526 0.19390029325513197 0.18409928950445184 0.24955369370837963
    bert-base-chinese R 0.2802348336594912 0.4495379830615209 0.32434195447894076 0.21434049079754602 0.2808535178777393 0.20351906158357772 0.1951314566657673 0.27827989973208334
    bert-base-chinese F 0.25909980430528373 0.45072033517111976 0.3256465859205585 0.20718302658486706 0.26855055747789314 0.21008797653958944 0.19722996672362622 0.27407403610327685
    dbmdz/bert-base-turkish-cased P 0.2939334637964775 0.4726966624256211 0.3287847534422877 0.18852249488752557 0.2750865051903114 0.2133724340175953 0.20964115478010611 0.2831482097914178
    dbmdz/bert-base-turkish-cased R 0.32289628180039137 0.4839290074668106 0.3352726503411435 0.2312116564417178 0.2877739331026528 0.2129032258064516 0.21731570584884732 0.298757494401145
    dbmdz/bert-base-turkish-cased F 0.31859099804305285 0.4864993381398517 0.3396801889952575 0.22124233128834356 0.29257977700884275 0.22041055718475072 0.21977396048805348 0.2998253073068789
    bert-base-multilingual-cased P 0.338160469667319 0.5064708074693809 0.3588265369087287 0.22955010224948874 0.3075740099961553 0.23049853372434018 0.22888748988218366 0.31428113569965666
    bert-base-multilingual-cased R 0.3573385518590998 0.5134107002865919 0.36771213483542253 0.25639059304703476 0.32410611303344866 0.2300293255131965 0.2407590610666427 0.3271066399487767
    bert-base-multilingual-cased F 0.3585127201565558 0.5162380640269371 0.36690114772306553 0.25140593047034765 0.31718569780853517 0.24269794721407625 0.24279761369427708 0.3279627315848278

    opened by Hou-jing 3
Releases (v0.3.12)
  • v0.3.12(Oct 14, 2022)

  • v0.3.11(Dec 10, 2021)

  • v0.3.10(Aug 5, 2021)

    • Updated to version 0.3.10
      • Support 8 SimCSE models
      • Fix the support of scibert (to be compatible with transformers >= 4.0.0)
      • Add scripts for reproducing some results in our paper (See this folder)
      • Support fast tokenizers in huggingface transformers with --use_fast_tokenizer. Notably, you will get different scores because of the difference in the tokenizer implementations (#106).
      • Fix non-zero recall problem for empty candidate strings (#107).
      • Add Turkish BERT support (#108).
    Source code(tar.gz)
    Source code(zip)
  • v0.3.9(Apr 17, 2021)

  • v0.3.8(Mar 3, 2021)

    • Support 53 new pretrained models including BART, mBART, BORT, DeBERTa, T5, mT5, BERTweet, MPNet, ConvBERT, SqueezeBERT, SpanBERT, PEGASUS, Longformer, LED, Blenderbot, etc. Among them, DeBERTa achieves higher correlation with human scores than RoBERTa (our default) on the WMT16 dataset. The correlations are presented in this Google sheet.
    • Please consider using --model_type microsoft/deberta-xlarge-mnli or --model_type microsoft/deberta-large-mnli (faster) if you want the scores to correlate better with human scores.
    • Add baseline files for DeBERTa models.
    • Add example code to generate baseline files (please see the details).
    Source code(tar.gz)
    Source code(zip)
  • v0.3.7(Dec 6, 2020)

  • v0.3.6(Sep 3, 2020)

    Updated to version 0.3.6

    • Support custom baseline files #74
    • The option --rescale-with-baseline is changed to --rescale_with_baseline so that it is consistent with other options.
    Source code(tar.gz)
    Source code(zip)
  • v0.3.5(Jul 17, 2020)

  • v0.3.4(Jun 10, 2020)

  • v0.3.3(May 10, 2020)

    • Fixing the bug with empty strings (issue #47).
    • Supporting 6 ELECTRA models and 24 smaller BERT models.
    • A new Google sheet for tracking the performance (i.e., Pearson correlation with human judgment) of different models on WMT16 to-English.
    • Including the script for tuning the best number of layers of an English pre-trained model on WMT16 to-English data (See the details).
    Source code(tar.gz)
    Source code(zip)
  • v0.3.2(Apr 18, 2020)

    • Bug fixed: fixing the bug in v0.3.1 when having multiple reference sentences.
    • Supporting multiple reference sentences with our command-line tool.
    Source code(tar.gz)
    Source code(zip)
  • v0.3.1(Apr 18, 2020)

    • A new BERTScorer object that caches the model to avoid re-loading it multiple times. Please see our jupyter notebook example for the usage.
    • Supporting multiple reference sentences for each example. The score function now can take a list of lists of strings as the references and return the score between the candidate sentence and its closest reference sentence.
    Source code(tar.gz)
    Source code(zip)
  • v0.3.0(Apr 18, 2020)

    • Supporting Baseline Rescaling: we apply a simple linear transformation to enhance the readability of BERTScore using pre-computed "baselines". It has been pointed out (e.g., in #20 and #23) that the numerical range of BERTScore is exceedingly small when computed with RoBERTa models. In other words, although BERTScore correctly distinguishes examples through ranking, the numerical scores of good and bad examples are very similar. We detail our approach in a separate post; a sketch of the transformation appears below.
    Source code(tar.gz)
    Source code(zip)
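
    With a pre-computed baseline $b$ for a given language and model, each raw score $x$ is rescaled as (a sketch of the linear transformation, assuming the published baseline files):

    $$x' = \frac{x - b}{1 - b}$$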
  • v0.2.3(Apr 18, 2020)

    • Supporting DistilBERT (Sanh et al.), ALBERT (Lan et al.), and XLM-R (Conneau et al.) models.
    • Including the version of huggingface's transformers in the hash code for reproducibility
    Source code(tar.gz)
    Source code(zip)
  • v0.2.2(Apr 18, 2020)

    • Bug fixed: when using RoBERTaTokenizer, we now set add_prefix_space=True which was the default setting in huggingface's pytorch_transformers (when we ran the experiments in the paper) before they migrated it to transformers. This breaking change in transformers leads to a lower correlation with human evaluation. To reproduce our RoBERTa results in the paper, please use version 0.2.2.
    • The best number of layers for DistilRoBERTa is included
    • Supporting loading a custom model
    Source code(tar.gz)
    Source code(zip)
  • v0.2.1(Apr 18, 2020)

  • v0.2.0(Apr 18, 2020)

    • Supporting BERT, XLM, XLNet, and RoBERTa models using huggingface's Transformers library
    • Automatically picking the best model for a given language
    • Automatically picking the layer based on a model
    • IDF weighting is disabled by default, as we show in the new version that the improvement brought by importance weighting is not consistent
    Source code(tar.gz)
    Source code(zip)
Owner
Tianyi
Graduate student at Stanford University.