TPLinker for NER: Chinese/English Named Entity Recognition

Overview

TPLinker-NER

If you like this project, please click the star button in the upper right corner. Thanks to everyone who stars it.

Introduction

This project borrows the HandshakingTagging idea from TPLinker and converts TPLinker from a relation extraction (RE) model into a named entity recognition (NER) model.

[Note] In fact, the base model used in this project is TPLinker_plus, because strictly following the original TPLinker design makes it almost unusable for NER. The reasons are explained in the Q&A section.

Compared with earlier NER models based on sequence labeling or half-pointer-half-tagging schemes, TPLinker-NER handles nested entities more effectively. Since TPLinker has already achieved excellent results on RE, TPLinker-NER, as a sub-capability extracted from it, should in theory not perform much worse. Due to my limited compute, I could not experiment on large-scale corpora; experiments were run only on the CLUENER dataset.

F1 on the CLUENER dev set

Best F1 on dev: 0.9111

Usage

Environment

The experiments were run with Python 3.6; the main third-party libraries are:

  • pytorch==1.8.1
  • wandb==0.10.26 # for logging results
  • glove-python-binary==0.1.0
  • transformers==4.1.1
  • tqdm==4.54.1

NOTE:

  1. wandb is an excellent library for tracking and visualizing machine learning experiments. It is disabled by default in this project; to manage logs with wandb, adjust the relevant settings in tplinker_plus_ner/config.py (a minimal usage sketch follows this list).
  2. If you are on Windows and have not installed the Glove library, or you only want to use BERT as the encoder, use train_only_bert.py as the main script.
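
For reference, the sketch below shows the basic wandb logging pattern; the project name and metric values are illustrative, and the repo's train.py wires this up internally once wandb is enabled in config.py.

import wandb

# Minimal wandb usage sketch; project name and metrics are illustrative.
wandb.init(project="tplinker_ner", config={"lr": 5e-5, "batch_size": 16})
for epoch in range(3):
    wandb.log({"epoch": epoch, "dev_f1": 0.0})  # placeholder metric values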

Data Preparation

Format Requirements

TPLinker-NER expects datasets in the following format:

  • Training set train_data.json and dev set valid_data.json
[
    {
        "id":"",
        "text":"原始语句",
        "entity_list":[{"text":"实体","type":"实体类型","char_span":"实体char级别的span","token_span":"实体token级别的span"}]
    },
    ...
]
  • Test set test_data.json
[
    {
        "id":"",
        "text":"原始语句"
    },
    ...
]

Data Conversion

To convert datasets from other formats to the TPLinker-NER format, refer to the conversion logic in raw_data/convert_dataset.py.
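
As an illustration of what such a conversion produces, here is a minimal sketch for one CLUENER-style input line. The exclusive-end span convention and the omission of token_span are assumptions; treat raw_data/convert_dataset.py as the authoritative logic.

import json

def convert_cluener_line(line, idx):
    # CLUENER lines look like {"text": ..., "label": {type: {mention: [[start, end], ...]}}}
    raw = json.loads(line)
    entity_list = []
    for ent_type, mentions in raw.get("label", {}).items():
        for ent_text, spans in mentions.items():
            for start, end in spans:          # CLUENER char spans are inclusive
                entity_list.append({
                    "text": ent_text,
                    "type": ent_type,
                    "char_span": [start, end + 1],  # assumed exclusive end
                    # token_span must come from your tokenizer's offset mapping
                })
    return {"id": str(idx), "text": raw["text"], "entity_list": entity_list}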

Data Location

Place the prepared data under data4bert/{exp_name} or data4bilstm/{exp_name}, where exp_name is the experiment name configured in tplinker_plus_ner/config.py.
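
For example, with a hypothetical exp_name of cluener, the BERT pipeline would expect:

data4bert/
└── cluener/
    ├── train_data.json
    ├── valid_data.json
    └── test_data.json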

Pretrained Models and Word Embeddings

Download the Chinese pretrained BERT model bert-base-chinese into pretrained_models/, and set bert_path accordingly in tplinker_plus_ner/config.py.

If you want to use BiLSTM, prepare pretrained word embeddings and place them in pretrained_emb/; see preprocess/Pretrain_Word_Embedding.ipynb for how to pretrain them.
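
If you prefer a script over the notebook, the sketch below shows minimal GloVe pretraining with glove-python-binary; the file paths and hyperparameters are illustrative, and the notebook remains the authoritative reference.

from glove import Corpus, Glove

# One whitespace-tokenized sentence per line; the file name is illustrative.
sentences = [line.strip().split() for line in open("corpus.txt", encoding="utf-8")]

corpus = Corpus()
corpus.fit(sentences, window=5)                  # build the co-occurrence matrix

glove = Glove(no_components=300, learning_rate=0.05)
glove.fit(corpus.matrix, epochs=30, no_threads=4, verbose=True)
glove.add_dictionary(corpus.dictionary)          # attach the word -> index mapping
glove.save("pretrained_emb/glove.model")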

Train

Read through tplinker_plus_ner/config.py and adjust the configuration and hyperparameters to your needs.

Then start training:

cd tplinker_plus_ner
python train.py

Evaluation

You also need to configure the evaluation-related parameters in tplinker_plus_ner/config.py. In particular, make sure the model_state_dict_dir value in eval_config matches the logging module you used.
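
As a hypothetical illustration of that correspondence (the directory names follow the upstream TPLinker convention and may differ in your config):

# Hypothetical illustration: checkpoints are read from whichever
# directory the logging module wrote them to.
logger = "wandb"  # the logging module used during training
model_state_dict_dir = "./wandb" if logger == "wandb" else "./default_log_dir"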

Then run the evaluation:

cd tplinker_plus_ner
python evaluate.py

Q&A

The answers below are my own thoughts from reworking the project and are for reference only; corrections are welcome.

  1. Why is TPLinker unsuitable for direct use on NER, so that TPLinker_plus is needed instead?

    My understanding: answering this requires looking at the original TPLinker design. Besides HandShaking, the author predefines three tag groups, ent, head_rel and tail_rel, each with its own sub-labels: ent:{"O":0,"ENT-H2T":1}, head_rel:{"O":0, "REL-SH2OH":1, "REL-OH2SH":2}, tail_rel:{"O":0, "REL-ST2OT":1, "REL-OT2ST":2}. During classification the three groups are treated as independent tasks. Taking head_rel as an example, the y_true matrix built from the raw data has shape (batch_size, rel_size, shaking_seq_len), where rel_size is the number of relation types, and the predicted y_pred has shape (batch_size, rel_size, shaking_seq_len, 3). A y_true matrix like this is already very sparse, with only the three labels 0, 1 and 2. Switching to NER, a (batch_size, ent_size, shaking_seq_len) matrix would be sparser still (only the two labels 0 and 1): an (ent_size, shaking_seq_len) slice may contain just one or two cells equal to 1, so the model keeps pushing every prediction towards 0 and fails to learn (my experiments confirmed this).

    How does the original TPLinker get around this? With a small trick that sidesteps the problem: it stops distinguishing entity types and treats every entity as the single DEFAULT type, compressing y_true to (batch_size, shaking_seq_len) and reducing its sparsity. The author's explanation is: "Because it is not necessary to recognize the type of entities for the relation extraction task since a predefined relation usually has fixed types for its subject and object." That is, entity type information matters little for relation extraction, because each relation in effect predefines the types of its arguments. In short, applying TPLinker directly to NER is not appropriate.
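
    To make the sparsity concrete, here is a quick back-of-the-envelope estimate; the sequence length, number of entity types and entity count below are illustrative, not project defaults.

    seq_len = 100
    shaking_seq_len = seq_len * (seq_len + 1) // 2   # 5050 upper-triangle cells
    ent_size = 10
    total_cells = ent_size * shaking_seq_len          # 50500 cells per sentence
    positives = 2                                     # say the sentence holds 2 entities
    print(positives / total_cells)                    # ~4e-05: nearly everything is 0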

    TPLinker_plus changes this: instead of treating ent, head_rel and tail_rel as three independent tasks, it combines every relation with every link tag to form one large tag set, and represents all relations in a sentence with a single HandShaking matrix. For example, given the three relations (or entity types) 主演 (stars-in), 出生于 (born-in) and 作者 (author), combining them with the link tags EH-ET, SH-OH, OH-SH, ST-OT and OT-ST yields 15 tags, which greatly enlarges the tag set. Correspondingly, the label tensor of TPLinker_plus becomes (batch_size, shaking_seq_len, tag_size). This change raises the relative number of non-zero cells and thus lowers the sparsity. (This is only one reason; for the more important one see question 2.)
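
    A small sketch of how such a combined tag set is enumerated; the separator character is an assumption, and the relation names are the three from the example above.

    relations = ["主演", "出生于", "作者"]
    link_tags = ["EH-ET", "SH-OH", "OH-SH", "ST-OT", "OT-ST"]
    tags = [f"{rel}:{lt}" for rel in relations for lt in link_tags]  # separator is illustrative
    print(len(tags))  # 15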

  2. What other optimizations does TPLinker_plus make?

    • Task reformulation: as the conclusion of question 1 shows, by enlarging the tag set TPLinker_plus also turns the original multi-class task into a multi-label classification task: the shaking sequence of each sentence may carry multiple tags, and their number is not fixed, e.g. (a generic loss sketch follows this block):
    # Suppose the sentence has seq_len=10, so shaking_seq=55,
    # and there are 8 tag combinations, i.e. tag_size=8
    [
        [0,0,1,0,1,0,1,0],
        [1,0,1,0,0,0,0,1],
        ...
        # the remaining 53 rows
    ]
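
    A generic way to train such a multi-label target is an element-wise binary loss; the sketch below uses BCEWithLogitsLoss for illustration, which is not necessarily the exact loss TPLinker_plus uses.

    import torch
    import torch.nn as nn

    # Shapes follow the example above: (batch, shaking_seq=55, tag_size=8).
    logits = torch.randn(4, 55, 8)                      # one logit per cell and tag
    targets = torch.randint(0, 2, (4, 55, 8)).float()   # multi-hot labels
    loss = nn.BCEWithLogitsLoss()(logits, targets)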
  3. How should several key terms in TPLinker-NER be understood?

    For a text containing n tokens:

    • shaking_matrix: an n*n matrix; shaking_matrix[i][j]=1 means the tokens from position i through position j form an entity. (Only the upper triangle is actually used, because an entity's start position never comes after its end position.)
    • matrix_index: the coordinates of the upper-triangular matrix: (0,0),(0,1),(0,2)...(0,n-1),(1,1),(1,2)...(1,n-1)...(n-1,n-1)
    • shaking_index: the indices into the upper triangle, of length $\frac{n(n+1)}{2}$, i.e. [0,1,2,...,n(n+1)/2 - 1]
    • shaking_ind2matrix_ind: maps an index to its matrix coordinate, i.e. [(0,0),(0,1),...,(n-1,n-1)]
    • matrix_ind2shaking_ind: maps a coordinate back to its index (lower-triangle cells are unused placeholders; see the sketch after this list), i.e.
      [[0, 1,   2,   ..., n-1         ],
       [0, n,   n+1, ..., 2n-2        ],
       ...
       [0, 0,   0,   ..., n(n+1)/2 - 1]]
      
    • spot: the start/end span of an entity plus its type id. For example, if the entity "北京" starts at position 7 and ends at position 9 in the matrix and has type "LOC" (id: 3), its spot is (7, 9, 3).
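
    The mappings above can be built in a few lines; this sketch mirrors the definitions rather than the project's exact implementation.

    # Build the upper-triangle index mappings for n tokens.
    n = 10
    shaking_ind2matrix_ind = [(i, j) for i in range(n) for j in range(i, n)]

    matrix_ind2shaking_ind = [[0] * n for _ in range(n)]  # lower triangle stays 0 (unused)
    for shaking_ind, (i, j) in enumerate(shaking_ind2matrix_ind):
        matrix_ind2shaking_ind[i][j] = shaking_ind

    # The spot (7, 9, 3) from the example is tagged at matrix_ind2shaking_ind[7][9].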

Acknowledgements
