💥 Fast State-of-the-Art Tokenizers optimized for Research and Production

Overview




Provides an implementation of today's most used tokenizers, with a focus on performance and versatility.

Main features:

  • Train new vocabularies and tokenize, using today's most used tokenizers.
  • Extremely fast (both training and tokenization), thanks to the Rust implementation. Takes less than 20 seconds to tokenize a GB of text on a server's CPU.
  • Easy to use, but also extremely versatile.
  • Designed for research and production.
  • Normalization comes with alignment tracking. It's always possible to get the part of the original sentence that corresponds to a given token.
  • Does all the pre-processing: truncates, pads, and adds the special tokens your model needs.

Bindings

We provide bindings to the following languages (more to come!): Rust (the original implementation), Python, and Node.js.

Quick example using Python:

Choose your model from Byte-Pair Encoding (BPE), WordPiece, or Unigram and instantiate a tokenizer:

from tokenizers import Tokenizer
from tokenizers.models import BPE

tokenizer = Tokenizer(BPE(unk_token="[UNK]"))

You can customize how pre-tokenization (e.g., splitting into words) is done:

from tokenizers.pre_tokenizers import Whitespace

tokenizer.pre_tokenizer = Whitespace()
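
You can also run the pre-tokenizer on its own to see what it produces; a quick check (pre_tokenize_str returns each piece with its offsets, and the output shown is what we'd expect from Whitespace):

from tokenizers.pre_tokenizers import Whitespace

# Inspect the pre-tokenization step in isolation: each piece
# comes back with its (start, end) offsets in the original text.
print(Whitespace().pre_tokenize_str("Hello, y'all!"))
# [('Hello', (0, 5)), (',', (5, 6)), ('y', (7, 8)), ("'", (8, 9)), ('all', (9, 12)), ('!', (12, 13))]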

Then training your tokenizer on a set of files just takes two lines of code:

from tokenizers.trainers import BpeTrainer

trainer = BpeTrainer(special_tokens=["[UNK]", "[CLS]", "[SEP]", "[PAD]", "[MASK]"])
tokenizer.train(files=["wiki.train.raw", "wiki.valid.raw", "wiki.test.raw"], trainer=trainer)
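
Once trained, the tokenizer serializes to a single JSON file, so you can save it and reload it later (the file name is just an example):

tokenizer.save("tokenizer-wiki.json")

# ... later, or in another process:
from tokenizers import Tokenizer
tokenizer = Tokenizer.from_file("tokenizer-wiki.json")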

Once your tokenizer is trained, encode any text with just one line:

output = tokenizer.encode("Hello, y'all! How are you 😁 ?")
print(output.tokens)
# ["Hello", ",", "y", "'", "all", "!", "How", "are", "you", "[UNK]", "?"]

Check the Python documentation or the Python quicktour to learn more!
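
And one more sketch: the pre-processing from the feature list (truncation, padding, special tokens) is configured on the same object. The token ids below assume the special-token order used by the trainer above ([CLS]=1, [SEP]=2, [PAD]=3):

from tokenizers.processors import TemplateProcessing

# Add [CLS]/[SEP] the way a BERT-style model expects them
tokenizer.post_processor = TemplateProcessing(
    single="[CLS] $A [SEP]",
    pair="[CLS] $A [SEP] $B:1 [SEP]:1",
    special_tokens=[("[CLS]", 1), ("[SEP]", 2)],
)
tokenizer.enable_truncation(max_length=512)
tokenizer.enable_padding(pad_id=3, pad_token="[PAD]")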

You might also like...
🤗 Transformers: State-of-the-art Natural Language Processing for PyTorch, TensorFlow, and JAX.

🤗 Transformers provides thousands of pretrained models.

Learning the meanings behind words is a key element of NLP. This project concentrates on the disambiguation of preposition senses: we train a BERT transformer model and surpass the state of the art.

New State-of-the-Art in Preposition Sense Disambiguation. Supervisor: Prof. Dr. Alexander Mehler, Alexander Henlein. Institution: Goethe University.

A model library for exploring state-of-the-art deep learning topologies and techniques for optimizing Natural Language Processing neural networks

NLP Architect: a Deep Learning NLP/NLU library by Intel® AI Lab.

Natural language processing summarizer using 3 state-of-the-art Transformer models: BERT, GPT2, and T5

NLP-Summarizer: a summarizer built on three state-of-the-art Transformer models: BERT, GPT2, and T5.

Easy to use, state-of-the-art Neural Machine Translation for 100+ languages

EasyNMT - Easy to use, state-of-the-art Neural Machine Translation. This package provides easy-to-use, state-of-the-art machine translation for more than 100 languages.

A very simple framework for state-of-the-art Natural Language Processing (NLP)

A very simple framework for state-of-the-art NLP. Developed by Humboldt University of Berlin and friends.

State of the Art Natural Language Processing

Spark NLP: State of the Art Natural Language Processing. Spark NLP is a Natural Language Processing library built on top of Apache Spark ML.


Comments
  • Can't import any modules


    What it says on the tin. Every module I try importing into a script spits out a "module not found" error.

    Traceback (most recent call last):
      File "ab2.py", line 3, in <module>
        from tokenizers.tools import BertWordPieceTokenizer
    ImportError: cannot import name 'BertWordPieceTokenizer' from 'tokenizers.tools' (/home/../anaconda3/envs/tokenizers/lib/python3.7/site-packages/tokenizers/tools/__init__.py)

    Traceback (most recent call last):
      File "ab2.py", line 3, in <module>
        from transformers import BertWordPieceTokenizer
    ImportError: cannot import name 'BertWordPieceTokenizer' from 'transformers' (/home/../anaconda3/envs/tokenizers/lib/python3.7/site-packages/transformers/__init__.py)

    I've tried:

    import BertWordPieceTokenizer
    from tokenizers.toold import AutoTokenizer
    from tokenizers import BartTokenizer

    to illustrate a few examples.

    I've installed Tokenizers in an anaconda3 venv via pip, via conda forge, and compiled from source.

    I've tried installing Transformers as well and get the same errors. I've tried installing Tokenizers and then installing Transformers and got the same errors.

    I've tried installing Transformers and then Tokenizers and gotten the same error.

    I've looked through the Tokenizers code and, unless I'm missing something (entirely possible), AutoTokenizer isn't even part of the package? I'll admit I'm not a very experienced programmer, but I'll be damned if I can find it.

    Help would be appreciated.

    System specs are:

    Linux Mint 21.1, RTX 2080 Ti, i7-8700K

    cuDNN 8.1.1, CUDA 11.2.0, TensorRT 7.2.3, Python 3.7. (By the way, figuring out what was needed here, finding the files, and actually installing them was beyond arduous. There has to be a better way. It's the only way I could get anything at all to work, though.)

    opened by kronkinatorix 1
  • How to decode with the existing tokenizer


    I trained the tokenizer following the Hugging Face tutorial:

    from tokenizers import Tokenizer
    from tokenizers.models import BPE
    from tokenizers.trainers import BpeTrainer
    from tokenizers.pre_tokenizers import Whitespace
    
    tokenizer = Tokenizer(BPE(unk_token="[UNK]"))
    trainer = BpeTrainer(special_tokens=["[UNK]", "[CLS]", "[SEP]", "[PAD]", "[MASK]"])
    tokenizer.pre_tokenizer = Whitespace()
    files = [f"wikitext-103-raw/wiki.{split}.raw" for split in ["test", "train", "valid"]]
    tokenizer.train(files, trainer)
    tokenizer.save("tokenizer-wiki.json")
    

    But I don't know how to use the existing tokenizer for decoding:

    tokenizer = Tokenizer.from_file("tokenizer-wiki.json")
    o=tokenizer.encode("sd jk sds  sds")
    tokenizer.decode(o.ids)
    # s d j k s ds s ds
    

    I know we can recover the output with o.offsets, but what if we do not know the offsets, for example when decoding from a language model or an NMT system?
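
    One way to let decode() recover word boundaries without offsets, sketched here as an assumption rather than something taken from this thread, is to train with an explicit end-of-word suffix and attach the matching BPEDecoder:

    from tokenizers import Tokenizer, decoders
    from tokenizers.models import BPE
    from tokenizers.trainers import BpeTrainer
    from tokenizers.pre_tokenizers import Whitespace

    # Mark word endings during training so the decoder can re-insert spaces
    tokenizer = Tokenizer(BPE(unk_token="[UNK]", end_of_word_suffix="</w>"))
    tokenizer.pre_tokenizer = Whitespace()
    trainer = BpeTrainer(special_tokens=["[UNK]"], end_of_word_suffix="</w>")
    files = [f"wikitext-103-raw/wiki.{split}.raw" for split in ["test", "train", "valid"]]
    tokenizer.train(files, trainer)

    tokenizer.decoder = decoders.BPEDecoder(suffix="</w>")
    output = tokenizer.encode("sd jk sds  sds")
    print(tokenizer.decode(output.ids))
    # "sd jk sds sds" (single spaces, since Whitespace discards the original spacing)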

    opened by ZhiYuanZeng 4
  • Is there any support for 'google/tapas-mini-finetuned-wtq' tokenizer?


    I'm trying to run a tokenizer in Java and eventually compile it to run on Android for an open-domain question answering project. I'm wondering why 'google/tapas-mini-finetuned-wtq' doesn't work with DeepJavaLibrary; the tokenizer works for more popular models. I'm assuming there is no fast tokenizer for TAPAS, so I was wondering if anyone had advice on how to run the TAPAS tokenizer and model on Android/Java?

    opened by memetrusidovski 4
  • OpenSSL internal error when importing tokenizers module


    When importing tokenizers 0.13.2 or 0.13.1 in a FIPS-mode-enabled environment on Red Hat Enterprise Linux 8.6 (Ootpa), we see this error:

    sh-4.4# python3 -c "import tokenizers"
    fips.c(145): OpenSSL internal error, assertion failed: FATAL FIPS SELFTEST FAILURE
    Aborted (core dumped)
    

    Additional info:

    No errors when using tokenizers==0.13.0 or tokenizers==0.11.4
    Python 3.8.13
    OpenSSL 1.1.1g FIPS  21 Apr 2020 or OpenSSL 1.1.1k  FIPS 25 Mar 2021
    
    opened by wai25 3
Releases(v0.13.2)
  • v0.13.2(Nov 7, 2022)

  • python-v0.13.2(Nov 7, 2022)

  • node-v0.13.2(Nov 7, 2022)

  • v0.13.1(Oct 6, 2022)

  • python-v0.13.1(Oct 6, 2022)

  • node-v0.13.1(Oct 6, 2022)

  • python-v0.13.0(Sep 21, 2022)

    [0.13.0]

    • [#956] PyO3 version upgrade
    • [#1055] M1 automated builds
    • [#1008] Decoder is now a composable trait, without breaking backward compatibility
    • [#1047, #1051, #1052] Processor is now a composable trait, without breaking backward compatibility

    Both trait changes warrant a "major" number since, despite best efforts to not break backward compatibility, the code is different enough that we cannot be exactly sure.

    Source code(tar.gz)
    Source code(zip)
  • v0.13.0(Sep 19, 2022)

    [0.13.0]

    • [#1009] unstable_wasm feature to support building on Wasm (it's unstable!)
    • [#1008] Decoder is now a composable trait, without breaking backward compatibility
    • [#1047, #1051, #1052] Processor is now a composable trait, without breaking backward compatibility

    Both trait changes warrant a "major" number since, despite best efforts to not break backward compatibility, the code is different enough that we cannot be exactly sure.

    Source code(tar.gz)
    Source code(zip)
  • node-v0.13.0(Sep 19, 2022)

    [0.13.0]

    • [#1008] Decoder is now a composable trait, without breaking backward compatibility
    • [#1047, #1051, #1052] Processor is now a composable trait, without breaking backward compatibility
    Source code(tar.gz)
    Source code(zip)
  • python-v0.12.1(Apr 13, 2022)

  • v0.12.0(Mar 31, 2022)

    [0.12.0]

    Bump minor version because of a breaking change.

    The breaking change was causing more issues upstream in transformers than anticipated: https://github.com/huggingface/transformers/pull/16537#issuecomment-1085682657

    The decision was to roll back that breaking change and figure out a different way to make this modification later.

    • [#938] Breaking change. Decoder trait is modified to be composable. This is only breaking if you are using decoders on their own; Tokenizer objects themselves should be unaffected.

    • [#939] Making the regex in ByteLevel pre_tokenizer optional (necessary for BigScience)

    • [#952] Fixed the vocabulary size of UnigramTrainer output (to respect added tokens)

    • [#954] Fixed not being able to save vocabularies with holes in the vocab (ConvBert). Now emits warnings instead of panicking.

    • [#961] Added link for Ruby port of tokenizers

    • [#960] Feature gate for cli and its clap dependency

    Source code(tar.gz)
    Source code(zip)
  • python-v0.12.0(Mar 31, 2022)

    [0.12.0]

    The breaking change was causing more issues upstream in transformers than anticipated: https://github.com/huggingface/transformers/pull/16537#issuecomment-1085682657

    The decision was to roll back that breaking change and figure out a different way to make this modification later.

    Bump minor version because of a breaking change.

    • [#938] Breaking change. Decoder trait is modified to be composable. This is only breaking if you are using decoders on their own; Tokenizer objects themselves should be unaffected.

    • [#939] Making the regex in ByteLevel pre_tokenizer optional (necessary for BigScience)

    • [#952] Fixed the vocabulary size of UnigramTrainer output (to respect added tokens)

    • [#954] Fixed not being able to save vocabularies with holes in the vocab (ConvBert). Now emits warnings instead of panicking.

    • [#962] Fix tests for python 3.10

    • [#961] Added link for Ruby port of tokenizers

    Source code(tar.gz)
    Source code(zip)
  • node-v0.12.0(Mar 31, 2022)

    [0.12.0]

    The breaking change was causing more issues upstream in transformers than anticipated: https://github.com/huggingface/transformers/pull/16537#issuecomment-1085682657

    The decision was to roll back that breaking change and figure out a different way to make this modification later.

    Bump minor version because of a breaking change. Using 0.12 to match other bindings.

    • [#938] Breaking change. Decoder trait is modified to be composable. This is only breaking if you are using decoders on their own; Tokenizer objects themselves should be unaffected.

    • [#939] Making the regex in ByteLevel pre_tokenizer optional (necessary for BigScience)

    • [#952] Fixed the vocabulary size of UnigramTrainer output (to respect added tokens)

    • [#954] Fixed not being able to save vocabularies with holes in the vocab (ConvBert). Now emits warnings instead of panicking.

    • [#961] Added link for Ruby port of tokenizers

    Source code(tar.gz)
    Source code(zip)
  • v0.11.2(Feb 28, 2022)

  • python-v0.11.6(Feb 28, 2022)

  • node-v0.8.3(Feb 28, 2022)

  • python-v0.11.5(Feb 16, 2022)

  • v0.11.1(Jan 17, 2022)

    • [#882] Fixing Punctuation deserialize without argument.
    • [#868] Fixing missing direction in TruncationParams
    • [#860] Adding TruncationSide to TruncationParams
    Source code(tar.gz)
    Source code(zip)
  • python-v0.11.3(Jan 17, 2022)

    • [#882] Fixing Punctuation deserialize without argument.
    • [#868] Fixing missing direction in TruncationParams
    • [#860] Adding TruncationSide to TruncationParams
    Source code(tar.gz)
    Source code(zip)
  • node-v0.8.2(Jan 17, 2022)

  • node-v0.8.1(Jan 17, 2022)

  • python-v0.11.4(Jan 17, 2022)

  • python-v0.11.2(Jan 4, 2022)

  • python-v0.11.1(Dec 28, 2021)

  • python-v0.11.0(Dec 24, 2021)

    Fixed

    • [#585] Conda version should now work on old CentOS
    • [#844] Fixing interaction between is_pretokenized and trim_offsets.
    • [#851] Doc links

    Added

    • [#657]: Add SplitDelimiterBehavior customization to Punctuation constructor
    • [#845]: Documentation for Decoders.

    Changed

    • [#850]: Added a feature gate to enable disabling http features
    • [#718]: Fix WordLevel tokenizer determinism during training
    • [#762]: Add a way to specify the unknown token in SentencePieceUnigramTokenizer
    • [#770]: Improved documentation for UnigramTrainer
    • [#780]: Add Tokenizer.from_pretrained to load tokenizers from the Hugging Face Hub
    • [#793]: Saving a pretty JSON file by default when saving a tokenizer
    Source code(tar.gz)
    Source code(zip)
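
    As a quick illustration of [#780] above, loading a tokenizer straight from the Hugging Face Hub is a one-liner; a minimal sketch (the model name is just an example):

    from tokenizers import Tokenizer

    # Downloads the tokenizer.json published alongside the model
    tokenizer = Tokenizer.from_pretrained("bert-base-uncased")
    print(tokenizer.encode("Hello world").tokens)
    # ['[CLS]', 'hello', 'world', '[SEP]']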
  • node-v0.8.0(Sep 2, 2021)

    BREAKING CHANGES

    • Many improvements on the Trainer (#519). The files must now be provided first when calling tokenizer.train(files, trainer).

    Features

    • Adding the TemplateProcessing
    • Add WordLevel and Unigram models (#490)
    • Add nmtNormalizer and precompiledNormalizer normalizers (#490)
    • Add templateProcessing post-processor (#490)
    • Add digitsPreTokenizer pre-tokenizer (#490)
    • Add support for mapping to sequences (#506)
    • Add splitPreTokenizer pre-tokenizer (#542)
    • Add behavior option to the punctuationPreTokenizer (#657)
    • Add the ability to load tokenizers from the Hugging Face Hub using fromPretrained (#780)

    Fixes

    • Fix a bug where long tokenizer.json files would be incorrectly deserialized (#459)
    • Fix RobertaProcessing deserialization in PostProcessorWrapper (#464)
    Source code(tar.gz)
    Source code(zip)
  • python-v0.10.3(May 24, 2021)

    Fixed

    • [#686]: Fix SPM conversion process for whitespace deduplication
    • [#707]: Fix stripping strings containing Unicode characters

    Added

    • [#693]: Add a CTC Decoder for Wav2Vec models

    Removed

    • [#714]: Removed support for Python 3.5
    Source code(tar.gz)
    Source code(zip)
  • python-v0.10.2(Apr 5, 2021)

    Fixed

    • [#652]: Fix offsets for Precompiled corner case
    • [#656]: Fix BPE continuing_subword_prefix
    • [#674]: Fix Metaspace serialization problems
    Source code(tar.gz)
    Source code(zip)
  • python-v0.10.1(Feb 4, 2021)

    Fixed

    • [#616]: Fix SentencePiece tokenizers conversion
    • [#617]: Fix offsets produced by Precompiled Normalizer (used by tokenizers converted from SPM)
    • [#618]: Fix Normalizer.normalize with PyNormalizedStringRefMut
    • [#620]: Fix serialization/deserialization for overlapping models
    • [#621]: Fix ByteLevel instantiation from a previously saved state (using __getstate__())
    Source code(tar.gz)
    Source code(zip)
  • python-v0.10.0(Jan 12, 2021)

    Added

    • [#508]: Add a Visualizer for notebooks to help understand how the tokenizers work
    • [#519]: Add a WordLevelTrainer used to train a WordLevel model
    • [#533]: Add support for conda builds
    • [#542]: Add Split pre-tokenizer to easily split using a pattern
    • [#544]: Ability to train from memory. This also improves the integration with datasets
    • [#590]: Add getters/setters for components on BaseTokenizer
    • [#574]: Add fuse_unk option to SentencePieceBPETokenizer

    Changed

    • [#509]: Automatically stubbing the .pyi files
    • [#519]: Each Model can return its associated Trainer with get_trainer()
    • [#530]: The various attributes on each component can be get/set (i.e. tokenizer.model.dropout = 0.1)
    • [#538]: The API Reference has been improved and is now up-to-date.

    Fixed

    • [#519]: During training, the Model is now trained in-place. This fixes several bugs that forced reloading the Model after training.
    • [#539]: Fix BaseTokenizer enable_truncation docstring
    Source code(tar.gz)
    Source code(zip)
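
    As an aside, the in-memory training added by [#544] above means no files on disk are required; a minimal sketch:

    from tokenizers import Tokenizer
    from tokenizers.models import BPE
    from tokenizers.trainers import BpeTrainer

    tokenizer = Tokenizer(BPE(unk_token="[UNK]"))
    trainer = BpeTrainer(special_tokens=["[UNK]"])

    # Any iterator of strings works: a list, a generator, or batches
    # streamed from a datasets object.
    corpus = (f"line {i} of an in-memory corpus" for i in range(1000))
    tokenizer.train_from_iterator(corpus, trainer=trainer)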
Owner
Hugging Face
Solving NLP, one commit at a time!