fastai ULMFiT - Pretraining the Language Model, Fine-Tuning and Training a Classifier

Overview

fast.ai ULMFiT with SentencePiece from pretraining to deployment

Motivation: Why even bother with a non-BERT / Transformer language model? Short answer: with ULMFiT you can train a state-of-the-art text classifier with limited data and affordable hardware. The whole process (preparing the Wikipedia dump, pretraining the language model, fine-tuning the language model and training the classifier) takes about 5 hours on my workstation with an RTX 3090. Training the model with FP16 requires less than 8 GB VRAM - so you can train the model on affordable GPUs.

I also saw the paper Single Headed Attention RNN: Stop Thinking With Your Head on the roadmap for fast.ai 2.3, which could improve performance further.

This Repo is based on:

Pretrained models

Language | Local name | Code | Perplexity | Vocab size | Tokenizer | Download (.tgz files)
German | Deutsch | de | 16.1 | 15k | SP | https://bit.ly/ulmfit-dewiki
German | Deutsch | de | 18.5 | 30k | SP | https://bit.ly/ulmfit-dewiki-30k
Dutch | Nederlands | nl | 20.5 | 15k | SP | https://bit.ly/ulmfit-nlwiki
Russian | Русский | ru | 29.8 | 15k | SP | https://bit.ly/ulmfit-ruwiki
Portuguese | Português | pt | 17.3 | 15k | SP | https://bit.ly/ulmfit-ptwiki
Vietnamese | Tiếng Việt | vi | 18.8 | 15k | SP | https://bit.ly/ulmfit-viwiki
Japanese | 日本語 | ja | 42.6 | 15k | SP | https://bit.ly/ulmfit-jawiki
Italian | Italiano | it | 23.7 | 15k | SP | https://bit.ly/ulmfit-itwiki
Spanish | Español | es | 21.9 | 15k | SP | https://bit.ly/ulmfit-eswiki
Korean | 한국어 | ko | 39.6 | 15k | SP | https://bit.ly/ulmfit-kowiki
Thai | ไทย | th | 56.4 | 15k | SP | https://bit.ly/ulmfit-thwiki
Hebrew | עברית | he | 46.3 | 15k | SP | https://bit.ly/ulmfit-hewiki
Arabic | العربية | ar | 50.0 | 15k | SP | https://bit.ly/ulmfit-arwiki
Mongolian | Монгол | mn | - | - | - | see: Github: RobertRitz

Download with wget

# to preserve the filenames (.tgz!) when downloading with wget use --content-disposition
wget --content-disposition https://bit.ly/ulmfit-dewiki 

Usage of pretrained models - library fastai_ulmfit.pretrained

I've written a small library around this repo to make the pretrained models easy to use. You don't have to bother with model, vocab and tokenizer files and paths - the following functions take care of that.

Tutorial: fastai_ulmfit_pretrained_usage.ipynb (can also be opened in Colab)

Installation

pip install fastai-ulmfit

Usage

# import
from fastai_ulmfit.pretrained import *

url = 'http://bit.ly/ulmfit-dewiki'

# get tokenizer - if pretrained=True, the SentencePiece model used for
# language model pretraining will be reused. Default: False
tok = tokenizer_from_pretrained(url, pretrained=False)

# get language model learner for fine-tuning
# (dls are fastai DataLoaders built from your own data - see the sketch below)
learn = language_model_from_pretrained(dls, url=url, drop_mult=0.5).to_fp16()

# save fine-tuned model for classification
path = learn.save_lm('tmp/test_lm')

# get text classifier learner from fine-tuned model
learn = text_classifier_from_lm(dls, path=path, metrics=[accuracy]).to_fp16()
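
The snippet above assumes fastai DataLoaders dls built from your own data. A minimal sketch of how they could be created for language model fine-tuning - the DataFrame df and its text column are placeholders, not part of the library:

import pandas as pd
from fastai.text.all import TextDataLoaders

# placeholder: any DataFrame with a 'text' column containing your documents
df = pd.read_csv('your_data.csv')

# language model DataLoaders using the SentencePiece tokenizer from above
dls = TextDataLoaders.from_df(df, text_col='text', is_lm=True,
                              valid_pct=0.1, tok_tfm=tok, bs=128)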

Extract Sentence Embeddings

from fastai_ulmfit.embeddings import SentenceEmbeddingCallback
from sklearn.decomposition import PCA

# collect sentence embeddings while running predictions
se = SentenceEmbeddingCallback(pool_mode='concat')
_ = learn.get_preds(cbs=[se])

# project the embeddings to 2D with PCA
feat = se.feat
pca = PCA(n_components=2)
pca.fit(feat['vec'])
coords = pca.transform(feat['vec'])
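
To get a quick look at the projection, the 2-D coordinates can be plotted (a minimal sketch assuming matplotlib; coloring by class label is omitted because it depends on how your DataLoaders are set up):

import matplotlib.pyplot as plt

# scatter plot of the PCA-projected sentence embeddings
plt.scatter(coords[:, 0], coords[:, 1], s=3)
plt.title('Sentence embeddings (PCA)')
plt.show()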

Model pretraining

Setup

Python environment

fastai-2.2.7
fastcore-1.3.19
sentencepiece-0.1.95
fastinference-0.0.36

Install the packages with pip install -r requirements.txt

The trained language models are compatible with other fastai versions!

Docker

The Wikipedia-dump preprocessing requires Docker: https://docs.docker.com/get-docker/.

Project structure

.
├── we                         Docker image for the preparation of the Wikipedia-dump / wikiextractor
└── data          
    └── {language-code}wiki         
        ├── dump                    downloaded Wikipedia dump
        │   └── extract             extracted wikipedia-articles using wikiextractor
        ├── docs 
        │   ├── all                 all extracted Wikipedia articles as single txt-files
        │   ├── sampled             sampled Wikipedia articles for language model pretraining
        │   └── sampled_tok         cached tokenized sampled articles - created by fastai / sentencepiece
        └── model 
            ├── lm                  language model trained in step 2
            │   ├── fwd             forward model
            │   ├── bwd             backwards model
            │   └── spm             SentencePiece model
            │
            ├── ft                  fine tuned model trained in step 3
            │   ├── fwd             forward model
            │   ├── bwd             backwards model
            │   └── spm             SentencePiece model
            │
            └── class               classifier trained in step 4
                ├── fwd             forward learner
                └── bwd             backwards learner

1. Prepare Wikipedia-dump for pretraining

ULMFiT can be pretrained on relatively small datasets - 100 million tokens are sufficient to get state-of-the-art classification results (compared to Transformer models such as BERT, which need huge amounts of training data). The easiest way is to pretrain a language model on Wikipedia.

The code for the preparation steps is heavily inspired by / copied from the fast.ai NLP-course: https://github.com/fastai/course-nlp/blob/master/nlputils.py

I built a Docker container and script that automate the following steps:

  1. Download Wikipedia XML-dump
  2. Extract the text from the dump
  3. Sample 160,000 documents with a minimum length of 1,800 characters (results in 100-120 million tokens); both parameters can be changed - see the usage below

The whole process will take some time depending on the download speed and your hardware. For the 'dewiki' the preparation took about 45 min.

Run the following commands in the current directory

# build the wikiextractor docker file
docker build -t wikiextractor ./we

# run the docker container for a specific language
# docker run -v $(pwd)/data:/data -it wikiextractor -l <language-code> 
# for German language-code de run:
docker run -v $(pwd)/data:/data -it wikiextractor -l de
...
sucessfully prepared dewiki - /data/dewiki/docs/sampled, number of docs 160000/160000 with 110699119 words / tokens!

# To change the number of sampled documents or the minimum length see
usage: preprocess.py [-h] -l LANG [-n NUMBER_DOCS] [-m MIN_DOC_LENGTH] [--mirror MIRROR] [--cleanup]

# To clean up intermediate files (wikiextractor output and all split documents) run the following command.
# The Wikipedia XML dump and the sampled docs will not be deleted!
docker run -v $(pwd)/data:/data -it wikiextractor -l <language-code> --cleanup

2. Language model pretraining on Wikipedia Dump

Notebook: 2_ulmfit_lm_pretraining.ipynb

To get the best results, you can train two separate language models - a forward and a backward model. You'll have to run the complete notebook twice and set the backwards parameter accordingly. The models will be saved in separate folders (fwd / bwd). The same applies to the fine-tuning and training of the classifier.

Parameters

Change the following parameters according to your needs:

lang = 'de'        # language of the Wikipedia-Dump
backwards = False  # train a backwards model? Default: False for the forward model
bs = 128           # batch size
vocab_sz = 15000   # vocab size - 15k / 30k work fine with SentencePiece
num_workers = 18   # num_workers for the dataloaders
step = 'lm'        # language model - don't change
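
For orientation, here is a minimal sketch of how these parameters typically feed into fastai for pretraining. It is not the notebook's exact code - the DataFrame df, paths and the training schedule are placeholders:

from fastai.text.all import *

# SentencePiece tokenizer trained on the sampled Wikipedia articles
tok = SentencePieceTokenizer(lang=lang, vocab_sz=vocab_sz)

# language model DataLoaders (df is a placeholder DataFrame with one
# Wikipedia article per row in a 'text' column); the backwards parameter
# is applied when building the DataLoaders / TextBlock
dls = TextDataLoaders.from_df(df, text_col='text', is_lm=True, valid_pct=0.1,
                              tok_tfm=tok, bs=bs, num_workers=num_workers)

# AWD-LSTM language model trained from scratch with mixed precision
learn = language_model_learner(dls, AWD_LSTM, pretrained=False, drop_mult=0.5,
                               metrics=[accuracy, Perplexity()]).to_fp16()
learn.fit_one_cycle(10, 1e-2)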

Training Logs + config

model.json contains the parameters the language model was trained with and the statistics (losses and metrics) of the last epoch:

{
    "lang": "de",
    "step": "lm",
    "backwards": false,
    "batch_size": 128,
    "vocab_size": 15000,
    "lr": 0.01,
    "num_epochs": 10,
    "drop_mult": 0.5,
    "stats": {
        "train_loss": 2.894167184829712,
        "valid_loss": 2.7784812450408936,
        "accuracy": 0.46221256256103516,
        "perplexity": 16.094558715820312
    }
}

history.csv contains a log of the training metrics per epoch (losses, accuracy, perplexity):

epoch,train_loss,valid_loss,accuracy,perplexity,time
0,3.375441551208496,3.369227886199951,0.3934227228164673,29.05608367919922,23:00
...
9,2.894167184829712,2.7784812450408936,0.46221256256103516,16.094558715820312,22:44
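
If you want to inspect the training curves, the log can be loaded with pandas (a minimal sketch; the exact path depends on where the notebook stores the log):

import pandas as pd

# load the training log written during pretraining (path is illustrative)
hist = pd.read_csv('data/dewiki/model/lm/fwd/history.csv')
print(hist[['epoch', 'train_loss', 'valid_loss', 'perplexity']])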

3. Language model fine-tuning on unlabeled data

Notebook: 3_ulmfit_lm_finetuning.ipynb

To improve the performance on the downstream task, the language model should be fine-tuned. We are using a Twitter dataset (GermEval 2018/2019), so we fine-tune the LM on unlabeled tweets.

To use the notebook on your own dataset, create a .csv-file containing your (unlabeled) data in the text column.

Files required from the Language Model (previous step):

  • Model (*model.pth)
  • Vocab (*vocab.pkl)

I am not reusing the SentencePiece model from the language model! This can lead to slightly different tokenization, but fast.ai (language_model_learner()) and the fine-tuning take care of adding and training unknown tokens. This approach gave slightly better results than reusing the SP model from the language model.
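
Putting this step together with the fastai_ulmfit helpers from above - a minimal sketch, not the notebook's exact code; the CSV path, column name and training schedule are placeholders:

import pandas as pd
from fastai.text.all import TextDataLoaders
from fastai_ulmfit.pretrained import tokenizer_from_pretrained, language_model_from_pretrained

url = 'https://bit.ly/ulmfit-dewiki'

# unlabeled domain data (e.g. tweets) with a 'text' column
df = pd.read_csv('tweets_unlabeled.csv')

# new SentencePiece tokenizer trained on your corpus (pretrained=False, see the note above)
tok = tokenizer_from_pretrained(url, pretrained=False)
dls = TextDataLoaders.from_df(df, text_col='text', is_lm=True,
                              valid_pct=0.1, tok_tfm=tok, bs=128)

# fine-tune the pretrained language model on the domain data
learn = language_model_from_pretrained(dls, url=url, drop_mult=0.5).to_fp16()
learn.fine_tune(5, 1e-2)  # epochs / learning rate are illustrative

# save the fine-tuned LM for the classification step
path = learn.save_lm('tmp/finetuned_lm')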

4. Train the classifier

Notebook: 4_ulmfit_train_classifier.ipynb

The (fine-tuned) language model can now be used to train a classifier on a (small) labeled dataset.

To use the notebook on your own dataset, create a .csv-file containing your texts in the text column and your labels in the label column (see the sketch after the file list below).

Files required from the fine-tuned LM (previous step):

  • Encoder (*encoder.pth)
  • Vocab (*vocab.pkl)
  • SentencePiece-Model (spm/spm.model)
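
A minimal sketch of the classification step with the fastai_ulmfit helpers - not the notebook's exact code; column names, file paths and the training schedule are placeholders, and tok / path refer to the tokenizer and the saved LM from the fine-tuning sketch above:

import pandas as pd
from fastai.text.all import TextDataLoaders, accuracy
from fastai_ulmfit.pretrained import text_classifier_from_lm

# labeled data with 'text' and 'label' columns
df = pd.read_csv('tweets_labeled.csv')

# the DataLoaders must use the same tokenization / vocab as the fine-tuned LM;
# here the tokenizer from the fine-tuning sketch is reused
dls = TextDataLoaders.from_df(df, text_col='text', label_col='label',
                              valid_pct=0.1, tok_tfm=tok, bs=64)

# classifier learner initialized from the fine-tuned LM saved in step 3
learn = text_classifier_from_lm(dls, path=path, metrics=[accuracy]).to_fp16()
learn.fine_tune(5, 1e-2)  # epochs / learning rate are illustrative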

5. Use the classifier for predictions / inference on new data

Notebook: 5_ulmfit_inference.ipynb

Evaluation

German pretrained model

Results with an ensemble of forward + backward model (see the inference notebook). Neither the fine-tuning of the LM nor the training of the classifier was optimized, so there is still room for improvement.
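
A minimal sketch of how such an ensemble can be computed - it assumes learn_fwd and learn_bwd are the two trained classifiers and dl_fwd / dl_bwd are test DataLoaders built with the matching (forward / backwards) tokenization; this is not the notebook's exact code:

# average the class probabilities of the forward and backward classifier
preds_fwd, _ = learn_fwd.get_preds(dl=dl_fwd)
preds_bwd, _ = learn_bwd.get_preds(dl=dl_bwd)
preds_ens = (preds_fwd + preds_bwd) / 2
pred_classes = preds_ens.argmax(dim=1)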

Official results: https://ids-pub.bsz-bw.de/frontdoor/deliver/index/docId/9319/file/Struss_etal._Overview_of_GermEval_task_2_2019.pdf

Task 1 Coarse Classification

Classes: OTHER, OFFENSE

Accuracy: 79.68 %, F1: 75.96 (best BERT: 76.95)

Task 2 Fine Classification

Classes: OTHER, PROFANITY, INSULT, ABUSE

Accuracy: 74.56 %, F1: 52.54 (best BERT: 53.59)

Dutch model

Compared results with: https://arxiv.org/pdf/1912.09582.pdf
Dataset: https://github.com/benjaminvdb/DBRD

Accuracy: 93.97 % (best BERT: 93.0 %)

Japanese model

Compared results with: Livedoor news corpus

Accuracy: 97.1 % (best BERT: ~98 %)

Korean model

Compared with: https://github.com/namdori61/BERT-Korean-Classification
Dataset: https://github.com/e9t/nsmc

Accuracy: 89.6 % (best BERT: 90.1 %)

Deployment as REST-API

see https://github.com/floleuerer/fastai-docker-deploy
