
An Open-Source Framework for Prompt-learning.


Overview | Installation | How To Use | Docs

Overview

Prompt-learning is the latest paradigm for adapting pre-trained language models (PLMs) to downstream NLP tasks: it modifies the input text with a textual template and directly reuses the PLM's pre-training task to solve the downstream task. This library provides a standard, flexible and extensible framework for deploying the prompt-learning pipeline. OpenPrompt supports loading PLMs directly from huggingface transformers. In the future, we will also support PLMs implemented by other libraries.
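
For instance, a sentiment classification input can be turned into a cloze-style query that a masked language model already knows how to answer. The snippet below is a minimal, framework-free illustration of this idea using the huggingface transformers fill-mask pipeline; it is not OpenPrompt API, and the template and candidate words are only for illustration.

from transformers import pipeline

# Wrap the input with a textual template and let the masked LM fill the blank.
fill_mask = pipeline("fill-mask", model="bert-base-cased")
prompted = "The film was badly made. It was [MASK]."
for candidate in fill_mask(prompted, top_k=3):
    print(candidate["token_str"], candidate["score"])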

What Can You Do via OpenPrompt?


  • Use the implementations of current prompt-learning approaches. We have implemented a variety of prompting methods, including templating, verbalizing and optimization strategies, under a unified standard. You can easily call and understand these methods.
  • Design your own prompt-learning work. With the extensibility of OpenPrompt, you can quickly put your own prompt-learning ideas into practice.

Installation

Using Git

Clone the repository from github:

git clone https://github.com/thunlp/OpenPrompt.git
cd OpenPrompt
pip install -r requirements.txt
python setup.py install

Modify the code

If you want to modify the code and have your changes take effect without reinstalling, install in development mode instead:

python setup.py develop

Use OpenPrompt

Base Concepts

A Prompt class contains one (or multiple) Template(s) and one (or multiple) Verbalizer(s), where the Template class wraps the original input with a template, and the Verbalizer class constructs a projection between labels and target words in the vocabulary.

A PromptModel class combines the Template, Verbalizer and PLM, and is the object that actually takes part in training and inference.

Introduction by a Simple Example

With the modularity and flexibility of OpenPrompt, you can easily develop a prompt-learning pipeline.

Step 1: Define a task

The first step is to determine the current NLP task: think about what your data looks like and what you want from it. That is, the essence of this step is to determine the classes and the InputExamples of the task. For simplicity, we use Sentiment Analysis as an example.

from openprompt.data_utils import InputExample
classes = [ # There are two classes in Sentiment Analysis, one for negative and one for positive
    "negative",
    "positive"
]
dataset = [ # For simplicity, there are only two examples
    # text_a is the input text of the data, some other datasets may have multiple input sentences in one example.
    InputExample(
        guid = 0,
        text_a = "Albert Einstein was one of the greatest intellects of his time.",
    ),
    InputExample(
        guid = 1,
        text_a = "The film was badly made.",
    ),
]
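
The two examples above are unlabeled, which is enough for zero-shot inference. If you want to train or evaluate, you can also attach a gold label to each InputExample; in the sketch below the label is simply the index of the class in the classes list defined above.

labeled_dataset = [
    InputExample(
        guid = 0,
        text_a = "Albert Einstein was one of the greatest intellects of his time.",
        label = classes.index("positive"),  # integer class index
    ),
    InputExample(
        guid = 1,
        text_a = "The film was badly made.",
        label = classes.index("negative"),
    ),
]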

Step 2: Define a Pre-trained Language Model (PLM) as backbone.

Choose a PLM to support your task. Different models have different attributes; we encourage you to use OpenPrompt to explore the potential of various PLMs. OpenPrompt is compatible with models on huggingface.

from openprompt.plms import get_model_class
model_class = get_model_class(plm_type = "bert")
model_path = "bert-base-cased"
bertConfig = model_class.config.from_pretrained(model_path)
bertTokenizer = model_class.tokenizer.from_pretrained(model_path)
bertModel = model_class.model.from_pretrained(model_path)
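
Note that more recent versions of OpenPrompt load the model, tokenizer, config and tokenizer wrapper class with a single load_plm call (several of the issues below use it). A sketch, assuming you are running such a release:

from openprompt.plms import load_plm
plm, tokenizer, model_config, WrapperClass = load_plm("bert", "bert-base-cased")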

Step 3: Define a Template.

A Template is a modifier of the original input text and one of the most important modules in prompt-learning.

", "It", "was", " "], tokenizer = bertTokenizer, ) ">
from openprompt.prompts import ManualTemplate
promptTemplate = ManualTemplate(
    text = ["
     
      "
     , "It", "was", "
     
      "
     ],
    tokenizer = bertTokenizer,
)
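
For the first example above, this template conceptually produces a cloze-style input. A rough illustration of the wrapped text (OpenPrompt's actual wrapping yields a structured object rather than a plain string):

# Rough illustration only, not OpenPrompt output.
print(dataset[0].text_a, "It was", bertTokenizer.mask_token)
# Albert Einstein was one of the greatest intellects of his time. It was [MASK]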

Step 4: Define a Verbalizer

A Verbalizer is another important (but not necessary) component in prompt-learning, which projects the original labels (we have defined them as classes, remember?) to a set of label words. Here is an example where we project the negative class to the word bad, and the positive class to the words good, wonderful, great.

from openprompt.prompts import ManualVerbalizer
promptVerbalizer = ManualVerbalizer(
    classes = classes,
    label_words = {
        "negative": ["bad"],
        "positive": ["good", "wonderful", "great"],
    },
    tokenizer = bertTokenizer,
)
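
Conceptually, the verbalizer takes the PLM's logits at the masked position and aggregates the scores of each class's label words into one score per class. A minimal sketch of that projection (not OpenPrompt's implementation, which also handles multi-token label words, normalization and calibration):

import torch

def project_to_classes(mask_logits, label_words, tokenizer, classes):
    # mask_logits: 1-D tensor of vocabulary logits at the masked position
    class_scores = []
    for c in classes:
        word_ids = [tokenizer.convert_tokens_to_ids(w) for w in label_words[c]]
        class_scores.append(mask_logits[word_ids].mean())
    return torch.stack(class_scores)  # shape: (num_classes,)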

Step 5: Combine them into a PromptModel

Given the task, we now have a PLM, a Template and a Verbalizer; we combine them into a PromptModel. Note that although this example combines the three modules naively, you can actually define more complicated interactions among them.

from openprompt import PromptForClassification
promptModel = PromptForClassification(
    template = promptTemplate,
    model = bertModel,
    verbalizer = promptVerbalizer,
)
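
To actually run zero-shot inference, the wrapped examples still need to be tokenized by a data loader. Below is a minimal sketch, assuming the load_plm-based setup sketched in Step 2, since PromptDataLoader requires the tokenizer wrapper class returned by load_plm (one of the issues below hits exactly this requirement):

import torch
from openprompt import PromptDataLoader

data_loader = PromptDataLoader(
    dataset = dataset,
    tokenizer = tokenizer,
    template = promptTemplate,
    tokenizer_wrapper_class = WrapperClass,
)

promptModel.eval()
with torch.no_grad():
    for batch in data_loader:
        logits = promptModel(batch)
        preds = torch.argmax(logits, dim = -1)
        for p in preds:
            print(classes[p.item()])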

Please refer to our documentation for more details.

Datasets

We provide a series of download scripts in the dataset/ folder; feel free to use them to download benchmarks.

Citation

We are working on the technical report...

Contributors

We thank all the contributors to this project. More contributors are welcome!

Ning Ding, Shengding Hu, Weilin Zhao.

Comments
  • An error occurred while using the latest version (1.0.0)

    When I use ptr_template in the latest version, the following error occurs: TypeError: __init__() got an unexpected keyword argument 'placeholder_mapping'. Version 0.1.1 does not have this problem.

    opened by blacker521 8
  • Failed to run the demo in `tutorial`

    command: python tutorial/1.1_mixed_template.py

    output:

      File "tutorial/1.1_mixed_template.py", line 94, in <module>
        logits = prompt_model(inputs)
      File "/home/h/anaconda3/envs/openprompt/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl
        return forward_call(*input, **kwargs)
      File "/home/h/work/OpenPrompt/openprompt/pipeline_base.py", line 241, in forward
        outputs = self.verbalizer.gather_outputs(outputs)
    TypeError: gather_outputs() takes 1 positional argument but 2 were given
    
    opened by huchinlp 7
  • no attribute 'tokenize_one_example'

    Hi,

    thank you for your amazing work to ease the users of prompt learning.

    I tried to implement the LMBFF tutorial and happened to meet this error:

    Traceback (most recent call last):
      File "run_lmbff.py", line 116, in <module>
        dataloader = PromptDataLoader(dataset['train'], template, template_generate_tokenizer, template_tokenizer_wrapper, batch_size=len(dataset['train']), decoder_max_length=128) # register all data at once
      File "/home/oryza/playground/OpenPrompt/openprompt/pipeline_base.py", line 101, in __init__
        self.tokenize()
      File "/home/oryza/playground/OpenPrompt/openprompt/pipeline_base.py", line 137, in tokenize
        inputfeatures = InputFeatures(**self.tokenizer_wrapper.tokenize_one_example(wrapped_example, self.teacher_forcing), **wrapped_example[1]).to_tensor()
    AttributeError: 'T5Tokenizer' object has no attribute 'tokenize_one_example'
    

    This is my pip list:

    Package            Version   Editable project location
    ------------------ --------- ---------------------------------
    aiohttp            3.8.1
    aiosignal          1.2.0
    async-timeout      4.0.2
    asynctest          0.13.0
    attrs              21.4.0
    certifi            2021.10.8
    charset-normalizer 2.0.12
    click              8.1.2
    datasets           2.0.0
    dill               0.3.4
    filelock           3.6.0
    frozenlist         1.3.0
    fsspec             2022.3.0
    huggingface-hub    0.5.1
    idna               3.3
    importlib-metadata 4.11.3
    joblib             1.1.0
    multidict          6.0.2
    multiprocess       0.70.12.2
    nltk               3.7
    numpy              1.21.5
    openprompt         1.0.0     /home/oryza/playground/OpenPrompt
    packaging          21.3
    pandas             1.3.5
    pip                22.0.4
    protobuf           3.20.0
    pyarrow            7.0.0
    pyparsing          3.0.8
    python-dateutil    2.8.2
    pytz               2022.1
    PyYAML             6.0
    regex              2022.3.15
    requests           2.27.1
    responses          0.18.0
    rouge              1.0.0
    sacremoses         0.0.49
    scikit-learn       1.0.2
    scipy              1.7.3
    sentencepiece      0.1.96
    setuptools         41.2.0
    six                1.16.0
    sklearn            0.0
    tensorboardX       2.5
    threadpoolctl      3.1.0
    tokenizers         0.10.3
    torch              1.11.0
    tqdm               4.64.0
    transformers       4.10.0
    typing_extensions  4.1.1
    urllib3            1.26.9
    xxhash             3.0.0
    yacs               0.1.8
    yarl               1.7.2
    zipp               3.8.0
    

    Do you have any idea about the error? I read in another thread about installing SentencePiece to solve this problem but my sentencepiece is already there.

    Thank you in advance!

    Best, Oryza

    opened by khairunnisaor 6
  • When using InputExample() to build a Chinese dataset, Chinese characters are read in as Unicode escapes

    Code

    dataset = {}
    dataset["train"] = []
    for index,data in train_dataset.iterrows():
        input_example = InputExample(text_a = data["text"], label=data["class"], guid=data["id"])
        dataset["train"].append(input_example)
    print(dataset["train"][10])
    

    output

    {
      "guid": 13,
      "label": 2,
      "meta": {},
      "text_a": "\u522b\u6025\u3002\u8bf4\u4e0d\u51c6\u7684\u3002\u7b2c\u4e00\u6b21\u8fc7\u7684\u65f6\u5019\u4e5f\u5ba1\u6838\u4e86\u5341\u51e0\u5929\u3002\u4e0d\u8fc7\u6700\u540e\u5168\u989d\u5ea6\u901a\u8fc7\u3002\u5229\u606f\u9ad8\u5c31\u6ca1\u7528",
      "text_b": "",
      "tgt_text": null
    }
    
    opened by terence1023 6
  • bug:TypeError: _forward_unimplemented() got an unexpected keyword argument 'output_hidden_states'

    File "/workspace/knowledgegraphcommon/business/text_classification/prompt/text_classification_prompt.py", line 185, in train logits = self.prompt_model(inputs) File "/root/miniconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl return forward_call(*input, **kwargs) File "/workspace/OpenPrompt/openprompt/pipeline_base.py", line 263, in forward outputs = self.prompt_model(batch) File "/root/miniconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl return forward_call(*input, **kwargs) File "/workspace/OpenPrompt/openprompt/pipeline_base.py", line 185, in forward outputs = self.plm(**input_batch, output_hidden_states=True) File "/root/miniconda3/lib/python3.9/site-packages/torch/nn/modules/module.py", line 1051, in _call_impl return forward_call(*input, **kwargs) TypeError: _forward_unimplemented() got an unexpected keyword argument 'output_hidden_states'

    opened by franztao 5
  • Use 2.1_conditional_generation.py , after fine-tuning, it only generates the same char.  Why ?

    use 2.1_conditional_generation.py in datasets/CondGen/webnlg_2017/

    generated txt: ''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''

    opened by 353xiong 4
  • How to handle label words in Chinese?

    Label words may be tokenized into multiple tokens. In Chinese, label words are likely to be tokenized into characters. So, how should label words that consist of more than one character be handled?

    opened by NLPpupil 4
  • Question about updating the repository

    How do you keep this repository up to date? Do I need to clone the entire library every time and update it again as follows:

    git clone https://github.com/thunlp/OpenPrompt.git
    cd OpenPrompt
    pip install -r requirements.txt
    python setup.py install
    
    opened by probe2 4
  • TypeError: __init__() missing 1 required positional argument: 'tokenizer_wrapper_class'

    When going through the tutorial

    In step 6, raised the errors below:

    >>> data_loader = PromptDataLoader(
    ...     dataset = dataset,
    ...     tokenizer = bertTokenizer,
    ...     template = promptTemplate,
    ... )
    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    TypeError: __init__() missing 1 required positional argument: 'tokenizer_wrapper_class'

    opened by dongxiaohuang 4
  • Any special considerations when the initial model is bert or roberta?

    Hi authors, when I train with t5 or gpt2 as the initial model, the validation metric steadily improves with no problem, but with bert, roberta or electra the validation metric stays at 0.5203649397197784. Could you advise what causes this? example: input_example = InputExample(text_a=line['text_pair'], text_b=line['text'], label=int(line['label']), guid=i)

    Model initialization: plm, tokenizer, model_config, WrapperClass = load_plm("bert", "bert-base-chinese")

    Template: template_text = '{"placeholder":"text_a"}{"placeholder":"text_b"}的情感倾向是{"mask"}.'

    verbalizer: myverbalizer = ManualVerbalizer(tokenizer, num_classes=2, label_words=[["负"], ["正"]])

    Training details: Epoch 1, average loss: 3.3326262831687927 Epoch 1, average loss: 0.7787383239444913 Epoch 1, average loss: 0.7572225447236699 Epoch 1, average loss: 0.738348940730161 Epoch 1, average loss: 0.7296206120358232 Epoch 1, average loss: 0.7233000741192647 Epoch 1, average loss: 0.7194478078047589 Epoch 1, average loss: 0.7165702087618587 Epoch 1, average loss: 0.7136984900552019 Epoch 1, average loss: 0.7121389577100447 Epoch 1, average loss: 0.7103113287874931 Epoch 1, average loss: 0.7093091916511776 Epoch 1, average loss: 0.7082642679232515 Epoch 1, average loss: 0.7077864898547248 Epoch 1, average loss: 0.7074250399318126 Epoch 1, average loss: 0.7070826163498072 Epoch 1, average loss: 0.7063648934145984 Epoch 1, average loss: 0.7059904860616641 Epoch 1, average loss: 0.70552960885168 Epoch 1, average loss: 0.7050825911213101 Epoch 1, average loss: 0.7048186851440073 0.5203649397197784 Epoch 2, average loss: 0.6653246581554413 Epoch 2, average loss: 0.7000961779363898 Epoch 2, average loss: 0.6992864966194495 Epoch 2, average loss: 0.697152165840576 Epoch 2, average loss: 0.6964660410873108 Epoch 2, average loss: 0.6976269556980793 Epoch 2, average loss: 0.6974568861339253 Epoch 2, average loss: 0.6972834053179063 Epoch 2, average loss: 0.6972271847809284 Epoch 2, average loss: 0.6969758515266203 Epoch 2, average loss: 0.6968832315801383 Epoch 2, average loss: 0.6966261330479784 Epoch 2, average loss: 0.6964328451033501 Epoch 2, average loss: 0.6963928808987537 Epoch 2, average loss: 0.6964452584858793 Epoch 2, average loss: 0.6963973140276998 Epoch 2, average loss: 0.696516385802325 Epoch 2, average loss: 0.6964337500765108 Epoch 2, average loss: 0.6963930293084604 Epoch 2, average loss: 0.6962399163065522 Epoch 2, average loss: 0.7043500146401878 0.5203649397197784

    opened by qdchenxiaoyan 3
  • How to accelerate the downloading of pytorch_model.bin?

    Hello, when I try to run the example in the readme, I find the download speed is too slow. I use miniconda with python3.8 and cuda11.1. Are there any ways to speed up the download?
    opened by FelliYang 3
  • Question about the test score in tutorial/3.1_lmbff.py

    Hello, I ran this tutorial code implementing lmbff and the final test score was 0.5091743119266054.

    This performance is too low, so I checked the evaluation score during training and the prediction results. In fact, the model scored 0.5 in each epoch, and the final predictions were all 0. Exactly, 0.509 equals the majority-class score reported in the LM-BFF paper.

    I did not check in more detail, but apparently there is something wrong with the training of the model here.

    opened by ZekaiShao25 0
  • Intent classification task where the labels are Chinese: how to define label words?

    According to the documentation https://thunlp.github.io/OpenPrompt/notes/verbalizer.html, each entry defined in label_words is a single token. When handling a Chinese text classification task, a label consists of multiple characters; how should it be defined?

    mytemplate = ManualTemplate(tokenizer=tokenizer, text="""{"meta":"raw"}的意图是{"mask"}""") For example, the intents are "天气" (weather), "音乐" (music), "电影" (movie).

    opened by lonelydancer 1
  • PromptForClassification shouldn't change plm

    Hey! When I use freeze_plm in PromptForClassification, first with freeze_plm=True and then with freeze_plm=False, the PLM still behaves as though it is frozen. This does not seem like expected behavior, i.e.:

    plm, tokenizer, model_config, WrapperClass = load_plm("roberta", "roberta-base")
    prompt_model = PromptForClassification(plm=plm, template=promptTemplate, verbalizer=promptVerbalizer, 
                                           freeze_plm=True)
    prompt_model = PromptForClassification(plm=plm, template=promptTemplate, verbalizer=promptVerbalizer, 
                                           freeze_plm=False) 
    # do training.. behaves as though PLM frozen, 
    # e.g. outputs "RuntimeError: element 0 of tensors does not require grad and does not have a grad_fn"
    
    opened by guyd1995 0
  • Question about 2.1 conditional generation

    When I run this code, I get the following error: PermissionError: [Errno 13] Permission denied: '../../Generated_sentence_webnlg_gpt2_False.txt'. Even after giving the entire folder full permissions, the problem persists.

    opened by ZHUANG-jt 0
  • Save and re-load a trained prompt model

    Currently, I am saving the model using the following: torch.save(prompt_model.state_dict(), PATH)

    How can we load this back to test performance on other data? The PyTorch tutorial says to use the following:

    model = TheModelClass(*args, **kwargs)
    model.load_state_dict(torch.load(PATH))
    model.eval()

    What would TheModelClass be? An example would be really appreciated.

    opened by agrimaseth 0
Releases
v1.0.0

Owner
THUNLP (Natural Language Processing Lab at Tsinghua University)