🍊 PAUSE (Positive and Annealed Unlabeled Sentence Embedding), accepted by EMNLP'2021 🌴

Overview

PAUSE: Positive and Annealed Unlabeled Sentence Embedding

Sentence embedding refers to a set of effective and versatile techniques for converting raw text into numerical vector representations that can be used in a wide range of natural language processing (NLP) applications. Most of these techniques are either supervised or unsupervised. Compared to the unsupervised methods, the supervised ones make fewer assumptions about the optimization objective and usually achieve better results, but their training requires a large amount of labeled sentence pairs, which are not available in many industrial scenarios. To that end, we propose PAUSE (Positive and Annealed Unlabeled Sentence Embedding), a generic, end-to-end approach that learns high-quality sentence embeddings from partially labeled (positive-unlabeled, PU) datasets by jointly optimizing the supervised and PU losses. The main highlights of PAUSE include:

  • good sentence embeddings can be learned from datasets with only a few positive labels;
  • it can be trained in an end-to-end fashion;
  • it can be directly applied to any dual-encoder model architecture;
  • it is extended to scenarios with an arbitrary number of classes;
  • polynomial annealing of the PU loss is proposed to stabilize the training (see the sketch after this list);
  • our experiments (reproduction steps are illustrated below) show that PAUSE consistently outperforms baseline methods.
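
As a rough illustration (not the authors' exact implementation), the joint objective can be thought of as a supervised loss plus a polynomially annealed, non-negative PU risk term. The estimator form and annealing schedule below are assumptions for illustration only, and the names (pu_loss, annealing_weight, joint_loss) are hypothetical:

import tensorflow as tf

def pu_loss(scores_pos, scores_unl, prior=0.3):
    # Non-negative PU risk estimate (nnPU-style) built from binary cross-entropy
    # on positive and unlabeled classifier scores; "prior" is the expected
    # ratio of positive samples (cf. the --prior flag).
    bce = tf.keras.losses.BinaryCrossentropy(from_logits=True)
    risk_pos = prior * bce(tf.ones_like(scores_pos), scores_pos)
    risk_neg = (bce(tf.zeros_like(scores_unl), scores_unl)
                - prior * bce(tf.zeros_like(scores_pos), scores_pos))
    return risk_pos + tf.maximum(0.0, risk_neg)  # clip the negative part

def annealing_weight(epoch, total_epochs, power=2.0):
    # Polynomial schedule that ramps the PU term up from 0 to 1 over training;
    # the paper's exact schedule may differ.
    return (epoch / float(total_epochs)) ** power

def joint_loss(sup_loss, scores_pos, scores_unl, epoch, total_epochs, prior):
    # Supervised loss plus the annealed PU loss.
    return sup_loss + annealing_weight(epoch, total_epochs) * pu_loss(
        scores_pos, scores_unl, prior)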

This repository contains the TensorFlow implementation of PAUSE for reproducing the experimental results. If you use this repo in your work, please cite:

@inproceedings{cao2021pause,
  title={PAUSE: Positive and Annealed Unlabeled Sentence Embedding},
  author={Cao, Lele and Larsson, Emil and von Ehrenheim, Vilhelm and Cavalcanti Rocha, Dhiana Deva and Martin, Anna and Horn, Sonja},
  booktitle={Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing (EMNLP)},
  year={2021},
  url={https://arxiv.org/abs/2109.03155}
}

Prerequisites

Create a virtual environment first to avoid breaking your native environment. If you use Anaconda, run

conda update conda
conda create --name py37-pause python=3.7
conda activate py37-pause

Then install the dependent libraries:

pip install -r requirements.txt

Unsupervised STS

Models are trained on a combination of the SNLI and Multi-Genre NLI datasets, which contain one million sentence pairs annotated with three labels: entailment, contradiction and neutral. The trained model is tested on the STS 2012-2016, STS benchmark, and SICK-Relatedness (SICK-R) datasets, which have labels between 0 and 5 indicating the semantic relatedness of sentence pairs.

Training

Example 1: train PAUSE-small using 5% labels for 10 epochs

python train_nli.py \
  --batch_size=1024 \
  --train_epochs=10 \
  --model=small \
  --pos_sample_prec=5

Example 2: train PAUSE-base using 30% labels for 20 epochs

python train_nli.py \
  --batch_size=1024 \
  --train_epochs=20 \
  --model=base \
  --pos_sample_prec=30

To check the parameters, run

python train_nli.py --help

which will print the usage as follows.

usage: train_nli.py [-h] [--model MODEL]
                    [--pretrained_weights PRETRAINED_WEIGHTS]
                    [--train_epochs TRAIN_EPOCHS] [--batch_size BATCH_SIZE]
                    [--train_steps_per_epoch TRAIN_STEPS_PER_EPOCH]
                    [--max_seq_len MAX_SEQ_LEN] [--prior PRIOR]
                    [--train_lr TRAIN_LR] [--pos_sample_prec POS_SAMPLE_PREC]
                    [--log_dir LOG_DIR] [--model_dir MODEL_DIR]

optional arguments:
  -h, --help            show this help message and exit
  --model MODEL         The tfhub link for the base embedding model
  --pretrained_weights PRETRAINED_WEIGHTS
                        The pretrained model if any
  --train_epochs TRAIN_EPOCHS
                        The max number of training epoch
  --batch_size BATCH_SIZE
                        Training mini-batch size
  --train_steps_per_epoch TRAIN_STEPS_PER_EPOCH
                        Step interval of evaluation during training
  --max_seq_len MAX_SEQ_LEN
                        The max number of tokens in the input
  --prior PRIOR         Expected ratio of positive samples
  --train_lr TRAIN_LR   The maximum learning rate
  --pos_sample_prec POS_SAMPLE_PREC
                        The percentage of sampled positive examples used in
                        training; should be one of 1, 10, 30, 50, 70
  --log_dir LOG_DIR     The path where the logs are stored
  --model_dir MODEL_DIR
                        The path where models and weights are stored
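
The --pos_sample_prec flag sets the percentage of sampled positive examples used in training; presumably the entailment pairs serve as the positive class, and only that fraction of them keeps a positive label while everything else is treated as unlabeled. A simplified, hypothetical illustration of such subsampling (not the repo's actual data pipeline; make_pu_labels is a made-up helper):

import random

def make_pu_labels(nli_labels, pos_sample_prec=5, seed=0):
    # Keep roughly pos_sample_prec% of entailment pairs as labeled positives (1);
    # everything else, including the remaining entailment pairs, is unlabeled (0).
    rng = random.Random(seed)
    return [1 if label == "entailment" and rng.random() < pos_sample_prec / 100.0
            else 0
            for label in nli_labels]

print(make_pu_labels(["entailment", "neutral", "contradiction"] * 5, pos_sample_prec=30))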

Testing

After the model is trained, the console will print where the model is saved, e.g. ./artifacts/model/20210517-131724, where the directory name (20210517-131724) is the model ID. To test the model with that ID, run

python test_sts.py --model=20210517-131724

The test results on the STS datasets will be printed to the console and also saved to the file ./artifacts/test/sts_20210517-131724.txt.
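
For reference, STS scoring typically computes the cosine similarity of each sentence pair's embeddings and reports the Spearman correlation against the gold scores; a minimal sketch of that protocol is below, where embed is a placeholder for the trained encoder rather than an API of this repo:

import numpy as np
from scipy.stats import spearmanr

def sts_spearman(embed, sent_pairs, gold_scores):
    # Cosine similarity between the two embeddings of every pair,
    # correlated against the human-annotated relatedness scores (0-5).
    sims = []
    for s1, s2 in sent_pairs:
        v1, v2 = embed(s1), embed(s2)
        sims.append(np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2)))
    return spearmanr(sims, gold_scores).correlation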

Supervised STS

Training

You can continue to finetune a pretrained model on the supervised STSb dataset. For example, assume we have trained a PAUSE model based on small BERT (say, located at ./artifacts/model/20210517-131725); to finetune that model on STSb for 2 epochs, run

python ft_stsb.py \
  --model=small \
  --train_epochs=2 \
  --pretrained_weights=./artifacts/model/20210517-131725

Note that it is important to match the model size (--model) with the pretrained model size (--pretrained_weights).
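
For context, dual-encoder STSb fine-tuning is commonly a regression of the cosine similarity of the two sentence embeddings onto the gold score rescaled from [0, 5] to [0, 1]; the sketch below assumes that objective and uses hypothetical names, so it may differ from what ft_stsb.py actually does:

import tensorflow as tf

def stsb_regression_loss(emb_a, emb_b, gold_scores):
    # Mean squared error between cosine similarity and the normalized gold score.
    emb_a = tf.math.l2_normalize(emb_a, axis=1)
    emb_b = tf.math.l2_normalize(emb_b, axis=1)
    cos_sim = tf.reduce_sum(emb_a * emb_b, axis=1)      # in [-1, 1]
    target = tf.cast(gold_scores, tf.float32) / 5.0     # rescale 0-5 to 0-1
    return tf.reduce_mean(tf.square(cos_sim - target))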

Testing

After the model is finetuned, the console will print where the model is saved, e.g. ./artifacts/model/20210517-131726, where the directory name (20210517-131726) is the model ID. To test the model with that ID, run

python ft_stsb_test.py --model=20210517-131726

SentEval evaluation

To evaluate the PAUSE embeddings using SentEval (preferably on a GPU), you need to download the data first:

cd ./data/downstream
./get_transfer_data.bash
cd ../..

Then, run the sent_eval.py script:

python sent_eval.py \
  --data_path=./data \
  --model=20210328-212801

where the --model parameter specifies the ID of the model you want to evaluate. By default, the model is expected to be in the folder ./artifacts/model/embed. If you want to evaluate a trained model from our public GCS bucket (gs://motherbrain-pause/model/...), run (e.g. for PAUSE-NLI-base-50%):

python sent_eval.py \
  --data_path=./data \
  --model_location=gcs \
  --model=20210329-065047
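
The sent_eval.py script presumably wraps the standard SentEval engine; a minimal sketch of typical SentEval usage is shown below, where encoder is a stand-in for the trained PAUSE model rather than an API of this repo:

import numpy as np
import senteval  # https://github.com/facebookresearch/SentEval

def encoder(sentences):
    # Stand-in: replace with the trained PAUSE model's embedding call.
    return np.random.rand(len(sentences), 768)

def prepare(params, samples):
    return  # no vocabulary building needed for a fixed encoder

def batcher(params, batch):
    sentences = [' '.join(tokens) for tokens in batch]
    return np.asarray(encoder(sentences))

params = {'task_path': './data', 'usepytorch': True, 'kfold': 10,
          'classifier': {'nhid': 0, 'optim': 'adam', 'batch_size': 64,
                         'tenacity': 5, 'epoch_size': 4}}
se = senteval.engine.SE(params, batcher, prepare)
results = se.eval(['STSBenchmark', 'SICKRelatedness', 'MR', 'CR'])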

We provide the following models for demonstration purposes:

Model                 Model ID
PAUSE-NLI-base-100%   20210414-162525
PAUSE-NLI-base-70%    20210328-212801
PAUSE-NLI-base-50%    20210329-065047
PAUSE-NLI-base-30%    20210329-133137
PAUSE-NLI-base-10%    20210329-180000
PAUSE-NLI-base-5%     20210329-205354
PAUSE-NLI-base-1%     20210329-195024