🛸 spacy-transformers: Use pretrained transformers like BERT, XLNet and GPT-2 in spaCy

Overview

This package provides spaCy components and architectures to use transformer models via Hugging Face's transformers in spaCy. The result is convenient access to state-of-the-art transformer architectures, such as BERT, GPT-2, XLNet, etc.

This release requires spaCy v3. For the previous version of this library, see the v0.6.x branch.

Features

  • Use pretrained transformer models like BERT, RoBERTa and XLNet to power your spaCy pipeline.
  • Easy multi-task learning: backprop to one transformer model from several pipeline components.
  • Train using spaCy v3's powerful and extensible config system.
  • Automatic alignment of transformer output to spaCy's tokenization.
  • Easily customize what transformer data is saved in the Doc object.
  • Easily customize how long documents are processed.
  • Out-of-the-box serialization and model packaging.

🚀 Installation

Installing the package from pip will automatically install all dependencies, including PyTorch and spaCy. Make sure you install this package before you install the models. Also note that this package requires Python 3.6+, PyTorch v1.5+ and spaCy v3.0+.

pip install spacy[transformers]

For GPU installation, find your CUDA version using nvcc --version and add the version in brackets, e.g. spacy[transformers,cuda92] for CUDA 9.2 or spacy[transformers,cuda100] for CUDA 10.0.

If you are having trouble installing PyTorch, follow the instructions on the official website for your specific operating system and requirements, or try the following:

pip install spacy-transformers -f https://download.pytorch.org/whl/torch_stable.html
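
Once everything is installed, a minimal usage sketch (this assumes the en_core_web_trf pipeline has been downloaded, e.g. via python -m spacy download en_core_web_trf):

import spacy

nlp = spacy.load("en_core_web_trf")
doc = nlp("Apple shares rose on the news.")

# Transformer output is stored on the Doc via a custom extension
doc._.trf_data.tensors[0].shape  # wordpiece embeddings, e.g. (1, n_wordpieces, 768)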

📖 Documentation

⚠️ Important note: This package has been extensively refactored to take advantage of spaCy v3.0. Previous versions that were built for spaCy v2.x worked considerably differently. Please see previous tagged versions of this README for documentation on prior versions.

Pull requests
  • Use ModelOutput instead of tuples

• Save model output as ModelOutput instead of a list of tensors in TransformerData.model_output and FullTransformerBatch.model_output (see the sketch at the end of this list).

• For backwards compatibility with transformers v3, set return_dict = True in the transformer config.
    • TransformerData.tensors and FullTransformerBatch.tensors return ModelOutput.to_tuple().

    • Store any additional model output as ModelOutput in TransformerData.model_output.

      • Save all torch.Tensor and tuple(torch.Tensor) values in TransformerData.model_output for cases where tensor.shape[0] is the batch size so that it's possible to slice the output for individual docs.
        • Includes: pooler_output, hidden_states, attentions, and cross_attentions
    • Re-enable tests for gpt2 and xlnet in the CI.

    • Following #285, include some minor modifications and bug fixes for HFShim and HFObjects

      • Rename the temporary init-only configs in HFObjects and don't serialize them in HFShim once the model is initialized
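
A minimal sketch of the new access pattern (assuming a loaded transformer pipeline such as en_core_web_trf; the attribute names follow the description above):

import spacy

nlp = spacy.load("en_core_web_trf")
doc = nlp("spaCy meets transformers.")

output = doc._.trf_data.model_output  # a transformers ModelOutput
output.last_hidden_state.shape        # e.g. (1, n_wordpieces, hidden_size)
doc._.trf_data.tensors                # tuple view, i.e. model_output.to_tuple()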
    enhancement v1.1 
    opened by adrianeboyd 14
  • Add support for mixed-precision training

    This change makes it possible to use and configure the support for mixed-precision training that was added to thinc.

    Example configuration:

    [components.transformer.model]
    @architectures = "spacy-transformers.TransformerModel.v3"
    name = "roberta-base"
    mixed_precision = true
    
    enhancement perf / memory perf / speed 
    opened by danieldk 7
  • HFShim: support MPS device

    Before this change, two devices and map locations were supported:

    • CUDA: cuda:N
    • CPU: cpu

    This change adds support for other devices like Metal Performance Shader (MPS) devices by mapping to the active Torch device.
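
For illustration, a plain-PyTorch sketch of the device check involved (torch.backends.mps is the real PyTorch API; this is not spacy-transformers code):

import torch

# Map to MPS when available, otherwise fall back to CPU
device = "mps" if torch.backends.mps.is_available() else "cpu"
x = torch.zeros(2, 2, device=device)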

    opened by danieldk 6
• Added transformers_config for passing arguments to the transformer

Added transformers_config to allow the user to pass arguments to the transformers forward pass, most notably output_attentions.

For convenience, I used this example to test the code:

    import spacy
    
    nlp = spacy.blank("en")
    
    # Construction via add_pipe with custom config
    config = {
        "model": {
            "@architectures": "spacy-transformers.TransformerModel.v1",
            "name": "bert-base-uncased",
            "transformers_config": {"output_attentions": True},
        }
    }
transformer = nlp.add_pipe("transformer", config=config)
transformer.model.initialize()

doc = nlp("This is a sentence.")

    which gives you:

    len(doc._.trf_data.attention) # 12
    doc._.trf_data.attention[-1].shape  # (1, 12, 7, 7) last layer of attention 
    len(doc._.trf_data.tensors) # 2 
    doc._.trf_data.tensors[0].shape # (1, 7, 768) <-- wordpiece embedding
    doc._.trf_data.tensors[1].shape  # (1, 768) <-- assuming this is the pooled embedding?
    

Sidenote: it took me quite a while to find the default config string. It might be ideal to make this into a standalone file and load it in?

    enhancement 
    opened by KennethEnevoldsen 6
  • Add kwargs features of `from_pretrained()` in HFShim.from_bytes()

We're trying to pass some parameters, such as use_fast and trust_remote_code, to AutoTokenizer.from_pretrained() via kwargs in order to use custom tokenizers in spacy-transformers. In the current implementation of spacy-transformers, HFShim.from_bytes() does not apply kwargs to either AutoConfig.from_pretrained() or AutoTokenizer.from_pretrained(), while huggingface_from_pretrained() applies kwargs to both of them when creating a transformer model.

• https://github.com/huggingface/transformers/blob/dcb08b99f44919425f8ba9be9ddcc041af8ec25e/src/transformers/models/auto/tokenization_auto.py
• https://github.com/huggingface/transformers/blob/dcb08b99f44919425f8ba9be9ddcc041af8ec25e/src/transformers/models/auto/configuration_auto.py#L679
• https://github.com/explosion/spacy-transformers/blob/cab060701fe2bad0c9ae23a822249c9bebb56da7/spacy_transformers/layers/hf_shim.py#L98
• https://github.com/explosion/spacy-transformers/blob/cab060701fe2bad0c9ae23a822249c9bebb56da7/spacy_transformers/layers/transformer_model.py#L256

In this pull request, we use the _init_tokenizer_config and _init_transformer_config entries of the msg passed from the deserializer as the kwargs parameters of from_pretrained().

    Another possible approach is to write these kwargs in the transformer/cfg.
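
For reference, the kind of call this enables after deserialization (from_pretrained is the real transformers API; the model name is only an example):

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained(
    "bert-base-uncased",      # example model name
    use_fast=True,            # kwargs forwarded by this PR
    trust_remote_code=False,
)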

    opened by hiroshi-matsuda-rit 4
  • Convert token identifiers to Long for torch < 1.8.0

Since PyTorch 1.8.0, token identifiers can be either Int or Long for embedding lookups; prior versions only support Long. Since we still support older PyTorch versions, convert token identifiers to Long for compatibility.

    This fixes the incompatibility with older versions introduced in #289.
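
A minimal illustration of the underlying PyTorch behavior (plain torch, not the spacy-transformers code path; sizes are arbitrary):

import torch

embedding = torch.nn.Embedding(num_embeddings=30522, embedding_dim=768)
token_ids = torch.tensor([[101, 7592, 102]], dtype=torch.int32)
vectors = embedding(token_ids.long())  # torch < 1.8.0 only accepts Long indices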

    opened by danieldk 4
  • Fixing Transformer IO

Note: some tests are adjusted in this PR. This was done first, before implementing the code changes, as we were aware that the initialize statements shouldn't be there; cf. https://github.com/explosion/spaCy/issues/8319 and https://github.com/explosion/spaCy/issues/8566.

    Description

Before this PR, the HF transformer model was loaded through set_pytorch_transformer (stored in model.attrs["set_transformer"]), but this happened in the initialize call of TransformerModel. Unfortunately, this meant that saving/loading a transformer-based pipeline was kind of broken, as you needed to call initialize on a previously trained pipeline, which isn't the normal spaCy API. This also broke the typical from_bytes / to_bytes API.

    Furthermore, the from_disk / to_disk functionality worked with a "listener" transformer, because the transformer pipeline component saved out the PyTorch files directly. However, this solution did not work for the "inline" Transformer, because the TransformerModel would be used directly and not via the pipeline component.

    I've been looking at various different solutions, but the proposal in this PR is the only one that I got working for all use-cases: basically we need to load/define the transformer model in the constructor as we do for any other spaCy component.
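
With the model defined in the constructor, the standard spaCy IO round-trip works as expected; a minimal sketch (assuming an installed transformer pipeline such as en_core_web_trf):

import spacy

nlp = spacy.load("en_core_web_trf")
nlp.to_disk("/tmp/trf_pipeline")

# No separate initialize() call is needed to restore the transformer weights
nlp2 = spacy.load("/tmp/trf_pipeline")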

Unfortunately, with this proposal, old code that still contains the initialize calls as before will crash, complaining that the transformer model can't be set twice. I think this is actually the correct behaviour in the end, but it might break some people's code. The fix is obvious/easy though.

    But we'll have to discuss which version bump we want to do when releasing this.

    Fixes https://github.com/explosion/spaCy/issues/8319 Fixes https://github.com/explosion/spaCy/issues/8566

    bug feat / serialize 
    opened by svlandeg 4
  • IO for transformer component

    IO

    Been going in circles a bit with this, trying to puzzle it into the IO mechanisms we decided on for the config refactor for spaCy 3 ...

This PR: Transformer(Pipe) knows how to do to_disk and from_disk, storing the internal tokenizer & transformer objects using Hugging Face transformers' standard IO mechanisms. In the nlp/transformer output directory, this results in a folder model containing the files:

    • config.json
    • pytorch_model.bin
    • special_tokens_map.json
    • tokenizer_config.json
    • vocab.txt

This folder can be read using the spacy.TransformerFromFile.v1 architecture for the model and then calling from_disk on the pipeline component (which happens automatically when reading the nlp object from a config).
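
These files are what Hugging Face's standard IO produces; a rough sketch of the mechanism (save_pretrained is the real transformers API, the paths are illustrative):

from transformers import AutoModel, AutoTokenizer

model = AutoModel.from_pretrained("bert-base-uncased")
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

# Writes config.json, pytorch_model.bin, tokenizer_config.json, vocab.txt, ...
model.save_pretrained("nlp/transformer/model")
tokenizer.save_pretrained("nlp/transformer/model")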

If users want to download a model by using the architecture spacy.TransformerByName.v2, then when calling nlp.to_disk, we need to do a little hack rewriting that architecture to the from-file one. This is done by directly modifying nlp.config when the component is created with from_nlp. This feels hacky, but I'm not sure how else to prevent multiple downloads.

    Other fixes

    • fixed the config files to be up-to-date with the latest version of the v3 branch
    • moved install_extensions to the init of the transformer pipe, where I think it makes more sense. Added force=True to prevent warnings/errors when calling it multiple times (I don't think that matters?)
    enhancement feat / serialize 
    opened by svlandeg 4
  • WIP: Fix IO after init_model

    To create a distilbert-base-german-cased model with the init_model.py script, I had to add two additional fields to the serialization code.

    Fixes #117

    bug 
    opened by svlandeg 4
• Transformer: add update_listeners_in_predict option

    Draft: still needs docs, but I first wanted to discuss this proposal.

    The output of a transformer is passed through in two different ways:

    • Prediction: the data is passed through the Doc._.trf_data attribute.
    • Training: the data is broadcast directly to the transformer's listeners.

    However, the Transformer.predict method breaks the strict separation between training and prediction by also broadcasting transformer outputs to its listeners. This was added (I think) to support training with a frozen transformer.

This breaks down when we are training a model with an unfrozen transformer that is also listed in annotating_components. The transformer will first (as part of its update step) broadcast the tensors and backprop function to its listeners. Then, acting as an annotating component, it immediately overrides its own output and clears the backprop function. As a result, gradients will not flow into the transformer.

    This change fixes this issue by adding the update_listeners_in_predict option, which is enabled by default. When this option is disabled, the tensors will not be broadcast to listeners in predict.
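
A hypothetical config sketch of how the option could be disabled, assuming it ends up exposed as a setting on the transformer component (this draft still has to settle the exact name and location):

[components.transformer]
factory = "transformer"
update_listeners_in_predict = false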


    Alternatives considered:

    • Yanking the listener code from predict: breaks our current semantics, would make it harder to train with a frozen transformer.
• Checking in the listener whether the tensors we are receiving have the same batch ID as the ones we already have, and not updating if we already have the same batch with a backprop function. I thought this was a bit fragile, because it breaks when batching differs between training and prediction (?).
    bug feat / pipeline 
    opened by danieldk 3
  • Support offset mapping alignment for fast tokenizers

    Switch to offset mapping-based alignment for fast tokenizers. With this change, slow vs. fast tokenizers will not give identical results with spacy-transformers.
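
For context, a minimal sketch of the offset mapping that fast tokenizers expose (return_offsets_mapping is the real transformers API and requires a fast tokenizer):

from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased", use_fast=True)
encoded = tokenizer("spaCy meets transformers.", return_offsets_mapping=True)
encoded["offset_mapping"]  # (char_start, char_end) for each wordpiece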

    Additional modifications:

    • Update package setup for cython
    • Update CI for compiled package
    feat / alignment 
    opened by adrianeboyd 3
  • Add test for textcat CNN issue

    This is a test demonstrating the issue in https://github.com/explosion/spaCy/issues/11968. A potential fix is being worked on in https://github.com/explosion/thinc/pull/820.

    In its current condition, the test just creates a pipeline with textcat and transformer, creates a minimal doc and calls nlp.initialize. As described in the spaCy issue, that fails with this error:

    ValueError: Cannot get dimension 'nI' for model 'linear': value unset
    

    This will be left in draft until the fix is clarified.

    tests 
    opened by polm 7
  • Transformer.predict: do not broadcast to listeners

    The output of a transformer is passed through in two different ways:

    • Prediction: the data is passed through the Doc._.trf_data attribute.
    • Training: the data is broadcast directly to the transformer's listeners.

    However, the Transformer.predict method breaks the strict separation between training and prediction by also broadcasting transformer outputs to its listeners.

This breaks down when we are training a model with an unfrozen transformer that is also listed in annotating_components. The transformer will first (as part of its update step) broadcast the tensors and backprop function to its listeners. Then, acting as an annotating component, it immediately overrides its own output and clears the backprop function. As a result, gradients will not flow into the transformer.

This change removes the broadcast from the predict method. If a listener does not receive a batch, it attempts to get the transformer output from the Doc instances. This makes it possible to train a pipeline with a frozen transformer.

    This ports https://github.com/explosion/spaCy/pull/11385 to spacy-transformers. Alternative to #342.

    bug feat / pipeline 
    opened by danieldk 0
Releases (v1.1.9)
  • v1.1.9(Dec 19, 2022)

    • Extend support for transformers up to v4.25.x.
• Add support for Python 3.11 (currently limited to Linux due to supported platforms for PyTorch v1.13.x).
  • v1.1.8(Aug 12, 2022)

    • Extend support for transformers up to v4.21.x.
    • Support MPS device in HFShim (#328).
    • Track seen docs during alignment to improve speed (#337).
    • Don't require examples in Transformer.initialize (#341).
  • v1.1.7(Aug 25, 2022)

    • Extend support for transformers up to v4.20.x.
    • Convert all transformer outputs to XP arrays at once (#330).
    • Support alternate model loaders in HFShim and HFWrapper (#332).
  • v1.1.6(Jun 2, 2022)

    • Extend support for transformers up to v4.19.x.
    • Fix issue #324: Skip backprop for transformer if not available, for example if the transformer is frozen.
  • v1.1.5(Mar 15, 2022)

  • v1.1.4(Jan 14, 2022)

  • v1.1.3(Dec 7, 2021)

  • v1.1.2(Oct 28, 2021)

  • v1.1.1(Oct 19, 2021)

    🔴 Bug fixes

    • Fix #309: Fix parameter ordering and defaults for new parameters in TransformerModel architectures.
    • Fix #310: Fix config and model issues when replacing listeners.

    👥 Contributors

    @adrianeboyd, @svlandeg

  • v1.1.0(Oct 18, 2021)

    ✨ New features and improvements

    • Refactor and improve transformer serialization for better support of inline transformer components and replacing listeners.
    • Provide the transformer model output as ModelOutput instead of tuples in TransformerData.model_output and FullTransformerBatch.model_output. For backwards compatibility, the tuple format remains available under TransformerData.tensors and FullTransformerBatch.tensors. See more details in the transformer API docs.
    • Add support for transformer_config settings such as output_attentions. Additional output is stored under TransformerData.model_output. More details in the TransformerModel docs.
    • Add support for mixed-precision training.
    • Improve training speed by streamlining allocations for tokenizer output.
    • Extend support for transformers up to v4.11.x.

    🔴 Bug fixes

    • Fix support for GPT2 models.

    ⚠️ Backwards incompatibilities

    • The serialization format for transformer components has changed in v1.1 and is not compatible with spacy-transformers v1.0.x. Pipelines trained with v1.0.x can be loaded with v1.1.x, but pipelines saved with v1.1.x cannot be loaded with v1.0.x.
    • TransformerData.tensors and FullTransformerBatch.tensors return a tuple instead of a list.

    👥 Contributors

    @adrianeboyd, @bryant1410, @danieldk, @honnibal, @ines, @KennethEnevoldsen, @svlandeg

  • v1.0.6(Sep 2, 2021)

  • v1.0.5(Aug 26, 2021)

  • v1.0.4(Aug 12, 2021)

    • Extend transformers support to <4.10.0
    • Enable pickling of span getters and annotation setters, which is required for multiprocessing with spawn
  • v1.0.3(Jul 20, 2021)

  • v1.0.2(Apr 21, 2021)

    ✨ New features and improvements

    • Add support for transformers v4.3-v4.5
    • Add extra for CUDA 11.2

    🔴 Bug fixes

    • Fix #264, #265: Improve handling of empty docs
    • Fix #269: Add trf_data extension in Transformer.__call__ and Transformer.pipe to support distributed processing

    👥 Contributors

    Thanks to @bryant1410 for the pull requests and contributions!

  • v1.0.1(Feb 2, 2021)

  • v1.0.0(Feb 1, 2021)

    This release requires spaCy v3.

    ✨ New features and improvements

    • Rewrite library from scratch for spaCy v3.0.
    • Transformer component for easy pipeline integration.
    • TransformerListener to share transformer weights between components.
    • Built-in registered functions that are available in spaCy if spacy-transformers is installed in the same environment.

  • v1.0.0rc2(Jan 19, 2021)

    🌙 This release is a pre-release and requires spaCy v3 (nightly).

    ✨ New features and improvements

    • Add support for Python 3.9
    • Add support for transformers v4

    🔴 Bug fixes

    • Fix #230: Add upstream argument to TransformerListener.v1
    • Fix #238: Skip special tokens during alignment
    • Fix #246: Raise error if model max length exceeded
  • v1.0.0rc0(Oct 15, 2020)

    🌙 This release is a pre-release and requires spaCy v3 (nightly).

    ✨ New features and improvements

    • Rewrite library from scratch for spaCy v3.0.
    • Transformer component for easy pipeline integration.
    • TransformerListener to share transformer weights between components.
    • Built-in registered functions that are available in spaCy if spacy-transformers is installed in the same environment.

  • v0.6.2(Jun 29, 2020)

  • v0.6.1(Jun 18, 2020)

    ⚠️ This release requires downloading new models.

    • Update spacy-transformers for spaCy v2.3
    • Update and extend supported transformers versions to >=2.4.0,<2.9.0
    • Use transformers.AutoConfig to support loading pretrained models from https://huggingface.co/models
    • #123: Fix alignment algorithm using pytokenizations

    Thanks to @tamuhey for the pull request!

  • v0.5.3(Jun 18, 2020)

    Bug fixes related to alignment and truncation:

    • #191: Reset max_len in case of alignment error
    • #196: Fix wordpiecer truncation to be per sentence

    Enhancement:

    • #162: Let nlp.update handle Doc type inputs

    Thanks to @ZhuoruLin for the pull requests and helping us debug issues related to batching and truncation!

  • v0.6.0(May 24, 2020)

    Update to newer version of transformers.

    This library is being rewritten for spaCy v3, in order to improve its flexibility and performance and to make it easier to stay up to date with new transformer models. See here for details: https://github.com/explosion/spacy-transformers/pull/173

  • v0.5.2(May 24, 2020)

  • v0.5.1(Oct 28, 2019)

    • Downgrade version pin of importlib_metadata to prevent conflict.
    • Fix issue #92: Fix index error when calculating doc.tensor.

    Thanks to @ssavvi for the pull request!

  • v0.5.0(Oct 8, 2019)

    ⚠️ This release requires downloading new models. Also note the new model names that specify trf (transformers) instead of pytt (PyTorch transformers).

    • Rename package from spacy-pytorch-transformers to spacy-transformers.
    • Update to spacy>=2.2.0.
    • Upgrade to latest transformers.
    • Improve code and repo organization.
  • v0.4.0(Sep 4, 2019)

  • v0.3.0(Aug 27, 2019)

  • v0.2.0(Aug 12, 2019)

    • Add support for GLUE benchmark tasks.
    • Support text-pair classification. The specifics of this are likely to change, but you can see run_glue.py for current usage.
    • Improve reliability of tokenization and alignment.
    • Add support for segment IDs to the PyTT_Wrapper class. These can now be passed in as a second column of the RaggedArray input. See the model_registry.get_word_pieces function for example usage.
    • Set default maximum sequence length to 128.
    • Fix bug that caused settings not to be passed into PyTT_TextCategorizer on model initialization.
    • Fix serialization of XLNet model.
  • v0.1.1(Aug 10, 2019)

Owner

Explosion — a software company specializing in developer tools for Artificial Intelligence and Natural Language Processing