A fast and lightweight Python-based CTC beam search decoder for speech recognition.

Overview

pyctcdecode

A fast and feature-rich CTC beam search decoder for speech recognition written in Python, providing n-gram (kenlm) language model support similar to PaddlePaddle's decoder, but incorporating many new features such as byte pair encoding and real-time decoding to support models like Nvidia's Conformer-CTC or Facebook's Wav2Vec2.

pip install pyctcdecode

Main Features:

  • 🔥  hotword boosting
  • 🤖  handling of BPE vocabulary
  • 👥  multi-LM support for 2+ models
  • 🕒  stateful LM for real-time decoding
  •  native frame index annotation of words
  • 💨  fast runtime, comparable to C++ implementation
  • 🐍  easy-to-modify Python code

Quick Start:

import kenlm
from pyctcdecode import build_ctcdecoder

# load trained kenlm model
kenlm_model = kenlm.Model("/my/dir/kenlm_model.binary")

# specify alphabet labels as they appear in logits
labels = [
    " ", "a", "b", "c", "d", "e", "f", "g", "h", "i", "j", "k", "l",
    "m", "n", "o", "p", "q", "r", "s", "t", "u", "v", "w", "x", "y", "z",
]

# prepare decoder and decode logits via shallow fusion
decoder = build_ctcdecoder(
    labels,
    kenlm_model,
    alpha=0.5,  # tuned on a val set
    beta=1.0,  # tuned on a val set
)
text = decoder.decode(logits)

If the vocabulary is BPE-based, adjust the labels and set the is_bpe flag (merging of tokens for the LM is handled automatically):

labels = ["<unk>", "▁bug", "s", "▁bunny"]

decoder = build_ctcdecoder(
    labels,
    kenlm_model,
    is_bpe=True,
)
text = decoder.decode(logits)

Improve domain specificity by adding important contextual words ("hotwords") during inference:

hotwords = ["looney tunes", "anthropomorphic"]
text = decoder.decode(
    logits,
    hotwords=hotwords,
    hotword_weight=10.0,
)

Batch support via multiprocessing:

from multiprocessing import Pool

with Pool() as pool:
    text_list = decoder.decode_batch(logits_list, pool)

Use pyctcdecode for a pretrained Conformer-CTC model:

import nemo.collections.asr as nemo_asr

asr_model = nemo_asr.models.EncDecCTCModelBPE.from_pretrained(
  model_name='stt_en_conformer_ctc_small'
)
logits = asr_model.transcribe(["my_file.wav"], logprobs=True)[0].cpu().detach().numpy()

decoder = build_ctcdecoder(asr_model.decoder.vocabulary, is_bpe=True)
decoder.decode(logits)

The tutorials folder contains many well-documented notebook examples on how to run speech recognition using pretrained models from Nvidia's NeMo and Huggingface/Facebook's Wav2Vec2.

For more details on how to use all of pyctcdecode's features, have a look at our main tutorial.

Why pyctcdecode?

In scientific computing, there’s often a tension between a language’s performance and its ease of use for prototyping and experimentation. Although C++ is the conventional choice for CTC decoders, we decided to try building one in Python. This choice allowed us to easily implement experimental features, while keeping runtime competitive through optimizations like caching and beam pruning. We compared pyctcdecode against an industry-standard C++ decoder at various beam widths, visualizing the trade-off between word error rate and runtime. For beam widths of 10 or greater, pyctcdecode yields strictly superior performance, with lower error rates in less time; see the code here.

The use of Python allows us to easily implement features like hotword support with only a few lines of code.
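For example, instead of the build_ctcdecoder helper, the decoder can be assembled from its lower-level building blocks, which is what makes the pure-Python code easy to modify. The following is only a minimal sketch: the Alphabet, LanguageModel, and BeamSearchDecoderCTC classes match those used in the examples elsewhere on this page, but the exact argument names and defaults are assumptions to verify against the installed version, and the alpha/beta values are placeholders.

import kenlm
from pyctcdecode import Alphabet, BeamSearchDecoderCTC, LanguageModel

# reuse the character labels from the Quick Start above
kenlm_model = kenlm.Model("/my/dir/kenlm_model.binary")
alphabet = Alphabet.build_alphabet(labels)

# wrap the n-gram model with shallow-fusion weights (placeholders, tune on a val set)
lm = LanguageModel(kenlm_model, alpha=0.5, beta=1.0)

# assemble the beam search decoder directly from its components
decoder = BeamSearchDecoderCTC(alphabet, lm)
text = decoder.decode(logits)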

pyctcdecode can return either a single transcript, or the full results of the beam search algorithm. The latter provides the language model state to enable real-time inference as well as word-based logit indices (frames) to enable word-based timing and confidence score calculations natively through the decoding process.
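As an illustration, here is a rough sketch of consuming that richer output. The exact layout of each returned beam (text, language model state, word frames, and scores) is an assumption here and may differ between versions, so verify the unpacking against the installed release.

# full beam search output instead of a single transcript
beams = decoder.decode_beams(logits)

# assumed beam layout: (text, lm_state, word_frames, logit_score, combined_score)
text, lm_state, word_frames, logit_score, combined_score = beams[0]

# word_frames pairs each word with its (start, end) logit frame indices,
# e.g. [("bugs", (10, 16)), ("bunny", (17, 25))], which is what enables
# word-based timing and confidence calculations directly from decoding
for word, (start, end) in word_frames:
    print(word, start, end)

# lm_state captures where the language model left off, so it can seed the
# next decode_beams call when decoding audio in streaming chunks (check the
# decoder API for the exact argument name)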

Additional features such as BPE vocabulary, as well as examples of pyctcdecode as part of a full speech recognition pipeline, can be found in the tutorials section.

Resources:

License:

Licensed under the Apache 2.0 License. Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

Copyright 2021-present Kensho Technologies, LLC.

Comments
  • Getting key error from the pyctcdecode package, any idea?

    Traceback (most recent call last):
      File "/usr/lib/python3.8/multiprocessing/pool.py", line 125, in worker
        result = (True, func(*args, **kwds))
      File "/usr/lib/python3.8/multiprocessing/pool.py", line 48, in mapstar
        return list(map(*args))
      File "/usr/local/lib/python3.8/dist-packages/pyctcdecode/decoder.py", line 547, in _decode_beams_mp_safe
        decoded_beams = self.decode_beams(
      File "/usr/local/lib/python3.8/dist-packages/pyctcdecode/decoder.py", line 525, in decode_beams
        decoded_beams = self._decode_logits(
      File "/usr/local/lib/python3.8/dist-packages/pyctcdecode/decoder.py", line 329, in _decode_logits
        language_model = BeamSearchDecoderCTC.model_container[self._model_key]
    KeyError: b'\xf0\xaaD\x92+\x90\x16\xc9 \xf5,\xb4\x10\xb1y\x8e'
    
    opened by cleancoder7 13
  • Alphabet conversion from Hugging Face does not work

    Following the tutorial:

    from pyctcdecode import Alphabet, BeamSearchDecoderCTC
    
    vocab_dict = {'<pad>': 0, '<s>': 1, '</s>': 2, '<unk>': 3, '|': 4, 'E': 5, 'T': 6, 'A': 7, 'O': 8, 'N': 9, 'I': 10, 'H': 11, 'S': 12, 'R': 13, 'D': 14, 'L': 15, 'U': 16, 'M': 17, 'W': 18, 'C': 19, 'F': 20, 'G': 21, 'Y': 22, 'P': 23, 'B': 24, 'V': 25, 'K': 26, "'": 27, 'X': 28, 'J': 29, 'Q': 30, 'Z': 31}
    
    # make alphabet
    vocab_list = list(vocab_dict.keys())
    # convert ctc blank character representation
    vocab_list[0] = ""
    # replace special characters
    vocab_list[1] = "⁇"
    vocab_list[2] = "⁇"
    vocab_list[3] = "⁇"
    # convert space character representation
    vocab_list[4] = " "
    # specify ctc blank char index, since conventionally it is the last entry of the logit matrix
    alphabet = Alphabet.build_bpe_alphabet(vocab_list, ctc_token_idx=0)
    

    Results in:

    ValueError: Unknown BPE format for vocabulary. Supported formats are 1) ▁ for indicating a space and 2) ## for continuation of a word.
    

    I'm trying to use a HuggingFaces model with a KenLM decoding but I can't get past this point. Thanks in advance.

    opened by flariut 13
  • BPE vocabulary alternative format

    Hi, first of all, thanks for this great work; I did not expect Python to be this fast for such tasks. I am trying to use the decoder with logits from a BPE vocabulary, but my BPE notation is different from yours. Example: "I_ am_ hap py_ to_ be_ he re_". The implementation you provide seems to handle notations with leading-space subwords, while mine seems to be the inverse: it adds trailing-space subwords. I tried modifying the code to make it work, but I had no success so far. Is this something you could consider adding as a feature? If not, can you please help me make it work (which parts of the code should be modified to make this possible, mainly decoder.py)? Thanks in advance for your help.

    enhancement 
    opened by loquela-dev 10
  • Is there any literature or reference about this implementation?

    The code you contributed does not seem to be the CTC prefix beam search algorithm. Is there any literature or reference about this shallow fusion implementation?

    opened by lyjzsyzlt 10
  • Transcription being concatenated oddly

    I am trying to use the ctc decoding feature with kenlm on the wav2vec2 huggingface's logits

    vocab = ['l', 'z', 'u', 'k', 'f', 'r', 'g', 'i', 'v', 's', 'o', 'b', 'w', 'e', 'd', 'n', 'y', 'c', 'q', 'p', 'h', 't', 'a', 'x', ' ', 'j', 'm', '⁇', '', '⁇', '⁇']
    alphabet = Alphabet.build_alphabet(vocab, ctc_token_idx=-3)
    # Language Model
    lm = LanguageModel(kenlm_model, alpha=0.169, beta=0.055)
    # build the decoder and decode the logits
    decoder = BeamSearchDecoderCTC(alphabet, lm)
    

    which returns the following output with beam size 64:

    yeah jon okay i m calling from the clinic the family doctor clinessegryand this number six four five five one three o five

    while previously, when decoding with https://github.com/ynop/py-ctc-decode using the same LM and parameters, I was getting:

    yeah on okay i am calling from the clinic the family dot clinic try and this number six four five five one three o five

    I don't understand why the words are being concatenated together. Do you have any thoughts?

    opened by usmanfarooq619 10
  • Difficulty seeing meaningful changes with hotword boosting

    I am trying to test hotword boosting on a model meant to diagnose pronunciation mistakes, so the tokens are in IPA (international phonetic alphabet), but otherwise everything should work the same.

    I have two related issues.

    1. I'm having trouble getting the hotword to change the result at all, even when using insane hotword weights like 9999999.0. Any ideas why this might be happening?
    2. I can occasionally get the result to change, but I have an example below where the inclusion of a hotword changes a word in the result without actually outputting the hotword.
       Model output before CTCDecode: ðɪs wɪl bi dɪskʌst wɪð ɪndʌstɹi (this will be discussed with industry)
       Hotword used: dɪskʌsd (changing t for d)
       Model output after CTCDecode: ðɪs wɪl bi dɪskʌs wɪð ɪndʌstɹi (the t at the end of 'dɪskʌs' disappears)

    I didn't think this was possible based on how hotword boosting works? Am I misunderstanding or is this potentially a bug?

    Env info

    pyctcdecode 0.1.0
    numpy 1.21.0
    Non BPE model
    No LM
    

    Code

    
    # Change from 1 x classes x lengths to length x classes
    probabilities = probabilities.transpose(1, 2).squeeze(0)
    decoder = build_ctcdecoder(labels)
    hotwords = ["wɪd", "dɪskʌsd"]
    text = decoder.decode(probabilities.detach().numpy(), hotwords=hotwords, hotword_weight=1000.0)
    
    print(text)
    
    enhancement 
    opened by rbracco 9
  • Using Nemo with BPE models

    Hello,

    Great repo! The tutorial for NeMo models is working fine, but when going to a BPE model (like the recent Conformer one available in NeMo), there seems to be a trick that changes the alphabet in NeMo but not in pyctcdecode.

    https://github.com/NVIDIA/NeMo/blob/acbd88257f20e776c09f5015b8a793e1bcfa584d/scripts/asr_language_modeling/ngram_lm/eval_beamsearch_ngram.py#L112

    When trying to run something similar to the NeMo notebook, all the tokens seem shifted, which is why I guess it is related to this token offset.

    Thanks

    bug 
    opened by pehonnet 5
  • Using nemo language models

    Hello, we are using your package with NeMo’s Conformer-CTC as the acoustic model, and a language model that was trained using NeMo’s train_kenlm.py script. When running your beam-search decoder with the Conformer alone, it works great. But when we try to run it with the language model, we get poor results (very high WER), worse than running without the LM.

    To our understanding, NeMo’s train_kenlm.py script creates an LM using the Conformer’s tokenizer and performs a ‘trick’ that encodes the sub-word tokens of the training data as Unicode characters with an offset in the Unicode table. As a result, NeMo’s beam search decoder script performs the same ‘trick’ on the vocabulary before the beam search itself, and converts the output text from Unicode back to the original tokens. We are afraid that this might be the reason for our results. We would really appreciate it if you could instruct us how to use NeMo’s Conformer with a language model that was trained using NeMo’s train_kenlm.py script.

    In addition, when exploring the language model issue, we noticed that your beam search decoder can run with NeMo’s Conformer together with any KenLM language model, even ones that were created with a different tokenizer than the Conformer. Isn’t the LM scoring performed on the tokens? If so, how is this possible if the tokens of the Conformer and the language model are different? Thanks

    opened by ntaiblum 4
  • How do I install kenlm on Windows?

    Hey, I installed pyctcdecode using pip install pyctcdecode and it worked. Now I'm reading the quickstart and the first line is failing at import kenlm with the error ModuleNotFoundError: No module named 'kenlm'. When I run from pyctcdecode import build_ctcdecoder I get a hint: kenlm python bindings are not installed. Most likely you want to install it using: pip install https://github.com/kpu/kenlm/archive/master.zip

    but when I try to execute pip install https://github.com/kpu/kenlm/archive/master.zip it fails

    any help on that matter will be super useful, thanks

    opened by burgil 4
  • How are partial hypotheses managed?

    Hi there!

    May I ask how partial hypotheses are handled in your n-gram rescoring implementation? For instance, what if the AM outputs BPE tokens while the n-gram LM is at the word level? How is rescoring performed to ensure that all hypotheses are checked and the rescoring isn't applied only once the first space token is encountered?

    Thanks!

    opened by TParcollet 4
  • decode_beams word timestamps are not always recognized

    Hi,

    I've been testing using pyctcdecode with nemo models. We're mainly interested in getting the timestamps of the specific words while using conformers, and your implementation of this seems very useful for that!

    However, it seems that when using nemo models, many words don't have their timestamps recognized properly.

    When loading our model and decoding using this for example:

    asr_model = nemo_asr.models.EncDecCTCModelBPE.restore_from('our_model_path')
    decoder = build_ctcdecoder(asr_model.decoder.vocabulary)
    logits = asr_model.transcribe([file_path], logprobs=True)[0]
    text = decoder.decode_beams(logits)[0]
    

    we get a lot of words that have -1 values for the start or end indices (mostly for the start index). On a benchmark we did, about 30% of the start indices were not recognized, and around 0.5% of the end indices. This is despite the fact that the overall performance was quite good with 13.23 WER (before using a language model).

    When using this however: asr_model = nemo_asr.models.EncDecCTCModel.from_pretrained(model_name='QuartzNet15x5Base-En') all the words in the benchmark have values for the start and end (no -1 at all).

    The problem is reproducible with other pre-trained models, for example: asr_model = nemo_asr.models.EncDecCTCModelBPE.from_pretrained(model_name='stt_en_conformer_ctc_small') also have missed indices.

    The word output of these models is below.

    Your input would be very appreciated. Thanks!

    asr_model = nemo_asr.models.EncDecCTCModel.from_pretrained(model_name='QuartzNet15x5Base-En')

    [('hello', (78, 88)), ('this', (110, 115)), ('is', (120, 123)), ('a', (128, 129)), ('teft', (133, 142)), ('recording', (150, 170)), ('to', (185, 188)), ('tap', (194, 200)), ('the', (214, 217)), ('new', (222, 227)), ('application', (238, 265)), ('the', (452, 455)), ('number', (461, 472)), ('ieve', (484, 489)), ('one', (514, 519)), ('to', (524, 527)), ('three', (539, 547)), ('four', (560, 566)), ('three', (596, 603)), ('to', (612, 615)), ('one', (628, 634)), ('thank', (691, 697)), ('you', (702, 705)), ('and', (713, 717)), ('goodbyne', (723, 740))]

    asr_model = nemo_asr.models.EncDecCTCModelBPE.from_pretrained(model_name='stt_en_conformer_ctc_small')

    [('hello', (38, 51)), ('this', (-1, 56)), ('is', (-1, 59)), ('a', (-1, 62)), ('test', (65, 71)), ('recording', (75, 88)), ('to', (-1, 93)), ('test', (97, 103)), ('the', (-1, 107)), ('new', (110, 114)), ('application', (116, 132))]

    asr_model = nemo_asr.models.EncDecCTCModelBPE.restore_from('our_model_path')

    [('hello', (39, 51)), ('this', (-1, 57)), ('is', (-1, 60)), ('a', (-1, 63)), ('test', (66, 73)), ('recording', (76, 89)), ('to', (-1, 94)), ('test', (97, 104)), ('the', (-1, 107)), ('new', (111, 115)), ('application', (117, 223)), ('the', (-1, 226)), ('number', (228, 240)), ('is', (-1, 252)), ('one', (256, 259)), ('two', (260, 266)), ('three', (268, 276)), ('four', (278, 295)), ('three', (297, 303)), ('two', (304, 311)), ('one', (315, 341)), ('thank', (343, 348)), ('you', (-1, 354)), ('and', (-1, 358)), ('goodbye', (361, 371))]

    opened by ntaiblum 4
  • pyctcdecode not working with Nemo finetuned model

    Hi all, I am working on pyctcdecode integration with NeMo ASR models. It works very well (without errors) for pre-trained NeMo models like "stt_en_conformer_ctc_small" in the code snippet below:

    import nemo.collections.asr as nemo_asr

    myFile = ['sample-in-Speaker_1-11.wav']
    asr_model = nemo_asr.models.EncDecCTCModelBPE.from_pretrained(model_name='stt_en_conformer_ctc_small')
    logits = asr_model.transcribe(myFile, logprobs=True)[0]
    print((logits.shape, len(asr_model.decoder.vocabulary)))
    decoder = build_ctcdecoder(asr_model.decoder.vocabulary)
    decoder.decode(logits)

    The same code snippet fails if I use a fine-tuned NeMo model in place of the pretrained model. The error says: "ValueError: Input logits shape is (36, 513), but vocabulary is size 512. Need logits of shape: (time, vocabulary)". The fine-tuned model is loaded as below:

    asr_model = nemo_asr.models.EncDecCTCModelBPE.restore_from(restore_path="<path to fine-tuned model>")

    Pls suggest @gkucsko @lopez86 . Thanks

    opened by manjuke 0
  • Return the integer token ids along with text in decode_beams()

    The decode_beams method right now returns a tuple of info, and that includes the decoded text. However, for many purposes detailed below, we require the actual token ids to be returned instead of just the text.

    NeMo's decoding framework abstracts away how tokens are encoded and decoded, because we can map individual token ids to their corresponding decoding step. For example:

    A char model can emit tokens [0, 1, 2, 3] and we can do a simple dictionary lookup mapping them to [' ', 'a', 'b', 'c'].
    A subword model can emit tokens [0, 1, 2, ] and we can map them with a SentencePiece detokenization step to [, 'a', 'b', 'c'].

    Given token ids, we can perform much more careful decoding strategies. But right now that is not possible since only text is returned (or word frames, but again, subwords don't correspond to word frames).

    Given token ids, we can further perform accurate word merging with our own algorithms.

    Can the explicit integer ids be returned ?

    FYI @tango4j

    opened by titu1994 0
  • Filter certain paths from the beam search

    Hi everyone,

    I have a use case in which, based on some external context-based knowledge, I need to filter out certain paths from the beam search, so that we both avoid having them in the final output and still let the beam search explore unconstrained paths. Since it doesn't seem that something similar is already planned or implemented, I was wondering whether, after modifying the code for myself, I could also open a pull request to add such a feature, or whether it would not be of interest for the purposes of the library.

    Anyway, thanks for the great work!

    opened by andrea-gasparini 0
  • UnicodeDecodeError: 'charmap' codec can't decode byte

    Hello,

    When I try to load my KenLM model using the load_from_dir method on Windows, I get a

    UnicodeDecodeError: 'charmap' codec can't decode byte 0x8f in position 1703: character maps to <undefined>

    It seems that adding an encoding="utf8" parameter on line 376 of language_model.py solves this problem.

    opened by GaetanBaert 0
  • Add support for LM from transformers

    Hi, I want to say that you implemented a great package.

    Its block structure suggests that, besides LanguageModel, it could support many other kinds of language models.

    For example, adding support for AutoModelForCausalLM would extend it to a number of available models from Hugging Face, from GPT-2 and OPT to BERT and XGLM.

    opened by Theodotus1243 2
Owner
Kensho
Technology that brings transparency to complex systems