🧪 Cutting-edge experimental spaCy components and features

Overview

spacy-experimental: Cutting-edge experimental spaCy components and features

This package includes experimental components and features for spaCy v3.x, for example model architectures, pipeline components and utilities.


Installation

Install with pip:

python -m pip install -U pip setuptools wheel
python -m pip install spacy-experimental

Using spacy-experimental

Components and features may be modified or removed in any release, so always specify the exact version as a package requirement if you're experimenting with a particular component, e.g.:

spacy-experimental==0.147.0

Then you can add the experimental components to your config or import from spacy_experimental:

[components.experimental_edit_tree_lemmatizer]
factory = "experimental_edit_tree_lemmatizer"

Components

Edit tree lemmatizer

[components.experimental_edit_tree_lemmatizer]
factory = "experimental_edit_tree_lemmatizer"
# token attr to use as backoff when the predicted trees are not applicable; null to leave unset
backoff = "orth"
# prune trees that are applied less than this frequency in the training data
min_tree_freq = 2
# whether to overwrite existing lemma annotation
overwrite = false
scorer = {"@scorers":"spacy.lemmatizer_scorer.v1"}
# try to apply at most the k most probable edit trees
top_k = 1

Trainable character-based tokenizers

Two trainable tokenizers represent tokenization as a sequence tagging problem over individual characters and use the existing spaCy tagger and NER architectures to perform the tagging.

In the spaCy pipeline, a simple "pretokenizer" is applied as the pipeline tokenizer to split each doc into individual characters and the trainable tokenizer is a pipeline component that retokenizes the doc. The pretokenizer needs to be configured manually in the config or with spacy.blank():

nlp = spacy.blank(
    "en",
    config={
        "nlp": {
            "tokenizer": {"@tokenizers": "spacy-experimental.char_pretokenizer.v1"}
        }
    },
)
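
With the pretokenizer in place, each character becomes its own token until the trainable tokenizer retokenizes the doc. A quick sanity check (sketch, continuing from the snippet above):

doc = nlp("Hi!")
print([t.text for t in doc])  # expected: one token per character, e.g. ['H', 'i', '!']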

The two tokenizers currently reset any existing tag or entity annotation respectively in the process of retokenizing.

Character-based tagger tokenizer

In the tagger version experimental_char_tagger_tokenizer, the tagging problem is represented internally with character-level tags for token start (T), token internal (I), and outside a token (O). This representation comes from Elephant: Sequence Labeling for Word and Sentence Segmentation (Evang et al., 2013).

This is a sentence.
TIIIOTIOTOTIIIIIIIT

With the option annotate_sents, S replaces T for the first token in each sentence and the component predicts both token and sentence boundaries.

This is a sentence.
SIIIOTIOTOTIIIIIIIT
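
For illustration only, a small sketch (not part of the package) that derives these tag strings from token character offsets and sentence-start offsets:

def char_tio_tags(text, token_spans, sent_starts=()):
    # one tag per character: T/S = token/sentence start, I = token internal, O = outside
    tags = ["O"] * len(text)
    for start, end in token_spans:
        tags[start] = "S" if start in sent_starts else "T"
        for i in range(start + 1, end):
            tags[i] = "I"
    return "".join(tags)

# "This is a sentence." with a sentence starting at offset 0
print(char_tio_tags("This is a sentence.",
                    [(0, 4), (5, 7), (8, 9), (10, 18), (18, 19)],
                    sent_starts={0}))  # SIIIOTIOTOTIIIIIIIT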

A config excerpt for experimental_char_tagger_tokenizer:

[nlp]
pipeline = ["experimental_char_tagger_tokenizer"]
tokenizer = {"@tokenizers":"spacy-experimental.char_pretokenizer.v1"}

[components]

[components.experimental_char_tagger_tokenizer]
factory = "experimental_char_tagger_tokenizer"
annotate_sents = true
scorer = {"@scorers":"spacy-experimental.tokenizer_senter_scorer.v1"}

[components.experimental_char_tagger_tokenizer.model]
@architectures = "spacy.Tagger.v1"
nO = null

[components.experimental_char_tagger_tokenizer.model.tok2vec]
@architectures = "spacy.Tok2Vec.v2"

[components.experimental_char_tagger_tokenizer.model.tok2vec.embed]
@architectures = "spacy.MultiHashEmbed.v2"
width = 128
attrs = ["ORTH","LOWER","IS_DIGIT","IS_ALPHA","IS_SPACE","IS_PUNCT"]
rows = [1000,500,50,50,50,50]
include_static_vectors = false

[components.experimental_char_tagger_tokenizer.model.tok2vec.encode]
@architectures = "spacy.MaxoutWindowEncoder.v2"
width = 128
depth = 4
window_size = 4
maxout_pieces = 2
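
Assuming a pipeline has been trained with this config (the path below is a hypothetical output of spacy train), tokens and sentence boundaries can be inspected as usual:

import spacy

nlp = spacy.load("training/model-best")  # hypothetical trained pipeline
doc = nlp("This is a sentence. Here is another.")
print([t.text for t in doc])        # retokenized words
print([s.text for s in doc.sents])  # sentences, via annotate_sents = true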

Character-based NER tokenizer

In the NER version experimental_char_ner_tokenizer, each character in a token is part of a TOKEN entity:

T	B-TOKEN
h	I-TOKEN
i	I-TOKEN
s	I-TOKEN
 	O
i	B-TOKEN
s	I-TOKEN
	O
a	B-TOKEN
 	O
s	B-TOKEN
e	I-TOKEN
n	I-TOKEN
t	I-TOKEN
e	I-TOKEN
n	I-TOKEN
c	I-TOKEN
e	I-TOKEN
.	B-TOKEN
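
The same labels can be derived from token character offsets with a small variation of the earlier sketch (again, not part of the package):

def char_bio_labels(text, token_spans):
    # B-TOKEN = first character of a token, I-TOKEN = rest, O = outside
    labels = ["O"] * len(text)
    for start, end in token_spans:
        labels[start] = "B-TOKEN"
        for i in range(start + 1, end):
            labels[i] = "I-TOKEN"
    return labels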

A config excerpt for experimental_char_ner_tokenizer:

[nlp]
pipeline = ["experimental_char_ner_tokenizer"]
tokenizer = {"@tokenizers":"spacy-experimental.char_pretokenizer.v1"}

[components]

[components.experimental_char_ner_tokenizer]
factory = "experimental_char_ner_tokenizer"
scorer = {"@scorers":"spacy-experimental.tokenizer_scorer.v1"}

[components.experimental_char_ner_tokenizer.model]
@architectures = "spacy.TransitionBasedParser.v2"
state_type = "ner"
extra_state_tokens = false
hidden_width = 64
maxout_pieces = 2
use_upper = true
nO = null

[components.experimental_char_ner_tokenizer.model.tok2vec]
@architectures = "spacy.Tok2Vec.v2"

[components.experimental_char_ner_tokenizer.model.tok2vec.embed]
@architectures = "spacy.MultiHashEmbed.v2"
width = 128
attrs = ["ORTH","LOWER","IS_DIGIT","IS_ALPHA","IS_SPACE","IS_PUNCT"]
rows = [1000,500,50,50,50,50]
include_static_vectors = false

[components.experimental_char_ner_tokenizer.model.tok2vec.encode]
@architectures = "spacy.MaxoutWindowEncoder.v2"
width = 128
depth = 4
window_size = 4
maxout_pieces = 2

The NER version does not currently support sentence boundaries, but it would be easy to extend using a B-SENT entity type.

Biaffine parser

A biaffine dependency parser, similar to that proposed in [Deep Biaffine Attention for Neural Dependency Parsing](https://arxiv.org/abs/1611.01734) (Dozat & Manning, 2016). The parser consists of two parts: an edge predictor and an edge labeler. For example:

[components.experimental_arc_predicter]
factory = "experimental_arc_predicter"

[components.experimental_arc_labeler]
factory = "experimental_arc_labeler"

The arc predictor requires that a previous component (such as senter) sets sentence boundaries during training. Therefore, such a component must be added to annotating_components:

[training]
annotating_components = ["senter"]

The biaffine parser sample project provides an example biaffine parser pipeline.
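
As a rough sketch, wiring the components together in Python might look as follows (all components untrained here; PyTorch is required, and in practice you would train via the project's configs):

import spacy

nlp = spacy.blank("en")
nlp.add_pipe("senter")  # sets the sentence boundaries the arc predictor needs
nlp.add_pipe("experimental_arc_predicter")
nlp.add_pipe("experimental_arc_labeler")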

Architectures

None currently.

Other

Tokenizers

  • spacy-experimental.char_pretokenizer.v1: Tokenize a text into individual characters.

Scorers

  • spacy-experimental.tokenizer_scorer.v1: Score tokenization.
  • spacy-experimental.tokenizer_senter_scorer.v1: Score tokenization and sentence segmentation.

Bug reports and issues

Please report bugs in the spaCy issue tracker or open a new thread on the discussion board for other issues.

Older documentation

See the READMEs in earlier tagged versions for details about components in earlier releases.

Comments
  • Coref Components

    This is a continuation of https://github.com/explosion/spaCy/pull/7264, since we decided to add the coref components here first. It's still a work in progress.

    enhancement 
    opened by polm 15
  • Add experimental Span Suggesters

    This PR adds three new experimental suggester functions for the spancat component and a spaCy project showcasing how to use them in a config.cfg file.

    Subtree Suggester:

    • Uses annotations from the Tagger and Parser to suggest subtrees of individual tokens

    Chunk Suggester:

    • Uses annotations from the Tagger and Parser to suggest noun_chunks

    Sentence Suggester:

    • Uses sentence boundaries to suggest sentences

    These suggesters also include the ngram functionality, which allows users to set a list of sizes for suggesting individual ngrams.

    The spaCy project covers:

    • How to source components from existing models
    • How to use frozen_components & annotating_components
    • How to use custom suggester functions registered in the registry
    enhancement 
    opened by thomashacker 12
  • Span Finder Suggester

    This PR adds a new experimental component for learning span boundaries and a custom suggester function for spancat. It further adds a spaCy project showcasing how to use the SpanFinder component on 3 different datasets (Healthsea, ToxicSpans, Genia) with 2 configurations (tok2vec & transformer). The project also provides the possibility to train spancat with ngram and compare it to SpanFinder with a custom evaluation script that calculates the performance and overall coverage of the suggester functions.

    Features

    • spaCy project for comparing SpanFinder vs Ngram
    • SpanFinder model
    • SpanFinder component
    • SpanFinder suggester
    • Unit tests for component, model and suggester
    enhancement 
    opened by thomashacker 10
  • Fix handling of small docs in coref

    Docs with one or zero tokens fail in the coref component. This doesn't have a fix yet, just a failing test. (There is also a test for the span resolver, which does not fail.)

    bug 
    opened by polm 2
  • Fix issue with resolving final token in SpanResolver

    The SpanResolver seems unable to include the final token in a Doc in output spans. It will even produce empty spans instead of doing so.

    This makes changes so that within the model span end indices are treated as inclusive, and converts them back to exclusive when annotating docs. This has been tested to work, though an automated test should be added.

    bug 
    opened by polm 2
  • Make coref entry points work without PyTorch

    Before this PR, in environments without PyTorch, using spacy-experimental could fail due to attempts to load entry points. This change makes it so that the types required for class definitions (torch.nn.Module and torch.Tensor) are stubbed to object when torch is not available.
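
    A sketch of the stubbing pattern described here, assuming only the names are needed for class definitions:

    try:
        from torch import Tensor
        from torch.nn import Module
    except ImportError:
        # stub the types so class definitions still import without PyTorch
        Tensor = object
        Module = object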

    opened by polm 2
  • Fix device issue with indices in coref

    It looks like Torch 1.13.0 has some changes in the way devices are handled and can result in subtle errors in code that worked previously. This explicitly specifies a device in one place, and may resolve https://github.com/explosion/spaCy/issues/11734. For another example of this issue, see https://github.com/pytorch/pytorch/issues/85450.

    The core problem is that a CPU tensor is being indexed using a non-CPU tensor.
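
    A minimal illustration of this class of bug and the fix (assumes a CUDA device is available):

    import torch

    scores = torch.zeros(10)                   # CPU tensor
    idx = torch.tensor([1, 2], device="cuda")  # non-CPU indices
    # indexing scores[idx] directly mixes devices; move the indices first
    safe = scores[idx.to(scores.device)]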

    Leaving as a draft in case this doesn't resolve this issue, which might happen if we're doing the same thing somewhere else.

    bug 
    opened by polm 1
  • Add test step before PyTorch is installed

    spacy-experimental is supposed to be safe to load without PyTorch, even if large parts of it aren't functional, but that wasn't checked in tests. This adds a check for that by simply running the tests before installing PyTorch and again afterwards.

    Given the current state of master, which doesn't have #23, this should fail.

    opened by polm 1
  • `Coref`: Optimize `SpanResolver.set_annotations`

    Coerce scalar tensors to native Python integers to avoid comparison overhead.

    With the above change (and another optimization to SpanGroups.copy; PR), we see a 90% reduction in execution time of the set_annotation pipeline phase.
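
    A sketch of the kind of change (names are hypothetical):

    import torch

    starts = torch.tensor([0, 5, 12])
    # comparing 0-d tensors in a Python loop pays dispatch overhead per item;
    # converting once to a native int avoids it
    start = int(starts[0])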

    [Before/after profiler screenshots omitted.]

    opened by shadeMe 0
  • `Coref`: Optimize `create_gold_scores`

    Copy the entire mentions array to the CPU, create tuple keys on-demand, pre-allocate output matrix as Float2d. Also remove unnecessary casts in CoreferenceResolver.update.

    [Before/after profiler screenshots omitted.]

    opened by shadeMe 0
  • MultiEmbed

    Embedding component that is the deterministic version of MultiHashEmbed: each token is mapped to a fixed index, and tokens not in the vocabulary are mapped to a learned unknown vector.

    The mechanism to initialize MultiEmbed is a bit strange. The Model is created first with dummy Embed layers. Then, when init is called, MultiEmbed expects model.attrs["tables"] to already be set, which provides the mapping from token attributes to indices. During initialization, the dummy Embed layers are replaced by ones sized to the number of symbols in the tables.

    A helper callback is provided in set_attr.py that should be placed in the initialize.before_init section in the config. It can be used to set the tables for MultiEmbed.

    Currently, token_map.py is a script with the structure of the usual spaCy init scripts.

    opened by kadarakos 0
  • Support lazy, recursive sentence splitting

    We use sentence splitting in the biaffine parser to keep the O(n^2) biaffine attention model tractable. However, since the sentence splitter makes errors, the parser may not have the correct head available.

    This change adds another splitting strategy as the preferred splitting. The goal of this strategy is to split a Doc into pieces that are as large as possible given a maximum length n_max. This reduces the number of attachment errors that result from incorrect sentence splits, while still providing an upper bound on complexity (O(n_max^2)).

    The algorithm works as follows:

    • If the length |d| > n_max:
      • Find the highest-probability split in d according to senter.
      • Split d into d_1 and d_2 at that point.
      • Recursively apply this algorithm to d_1 and d_2.
    • Otherwise: do nothing.
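
    A minimal sketch of this strategy over token positions, assuming split_probs[i] is a (hypothetical) senter probability that token i starts a sentence:

    def recursive_split(n_tokens, split_probs, n_max):
        def split(start, end):
            if end - start <= n_max:
                return [(start, end)]
            # split at the most probable sentence start strictly inside the span
            mid = max(range(start + 1, end), key=lambda i: split_probs[i])
            return split(start, mid) + split(mid, end)
        return split(0, n_tokens)

    # toy example: 8 tokens, pieces of at most 4 tokens
    print(recursive_split(8, [0.0, 0.1, 0.2, 0.9, 0.1, 0.3, 0.8, 0.1], 4))
    # -> [(0, 3), (3, 6), (6, 8)]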

    Note: draft, requires functionality from PR https://github.com/explosion/spaCy/pull/11002, which targets spaCy v4.

    opened by danieldk 0
Releases (latest: v0.6.1)
  • v0.6.1 (Nov 4, 2022)

    • Coref: Docs of one or fewer tokens resulted in an error (#28).
    • Coref: resolved spans could not include the final token in a Doc (#27).
    • Biaffine parser: place tensors on the right device (#25).

    This release includes an updated trained pipeline for demonstration purposes. You can install it like this:

    pip install https://github.com/explosion/spacy-experimental/releases/download/v0.6.1/en_coreference_web_trf-3.4.0a2-py3-none-any.whl
    

  • v0.6.0 (Sep 28, 2022)

    • new coreference components (#17)

    ~~This release includes an experimental English coref pipeline. You can install the pipeline by downloading it from the assets in this release page, or install it directly with the following command:~~

    Update 2022-11-07: Some issues in the coref implementation have been fixed in the v0.6.1 release of this package. The pipeline below can still be installed and is left up for posterity, but it should only be used with spacy-experimental 0.6.0, even though installing it will pull in the newer version by default. If you want to use the package below (which is not recommended), be sure to pip install spacy-experimental==0.6.0.

    pip install https://github.com/explosion/spacy-experimental/releases/download/v0.6.0/en_coreference_web_trf-3.4.0a0-py3-none-any.whl
    

    For further information about the coref components, see the example project or the API documentation. We'll also be providing more detailed explanations in an upcoming blog post, video, and elsewhere.

  • v0.5.0 (Jun 10, 2022)

    • removed edit tree lemmatizer (#12, it's in core now)
    • biaffine parser updates (#9, #13)
    • add experimental Span Suggesters exploiting parser/tagger/sentence information (#11)
    • add SpanFinder: a new experimental component for learning span boundaries (#10)
  • v0.4.0 (Mar 18, 2022)

Owner
Explosion
A software company specializing in developer tools for Artificial Intelligence and Natural Language Processing