Source code for paper "Document-Level Relation Extraction with Adaptive Thresholding and Localized Context Pooling", AAAI 2021

Overview

ATLOP

Code for AAAI 2021 paper Document-Level Relation Extraction with Adaptive Thresholding and Localized Context Pooling.

If you make use of this code in your work, please cite the following paper:

@inproceedings{zhou2021atlop,
	title={Document-Level Relation Extraction with Adaptive Thresholding and Localized Context Pooling},
	author={Zhou, Wenxuan and Huang, Kevin and Ma, Tengyu and Huang, Jing},
	booktitle={Proceedings of the AAAI Conference on Artificial Intelligence},
	year={2021}
}

Requirements

  • Python (tested on 3.7.4)
  • CUDA (tested on 10.2)
  • PyTorch (tested on 1.7.0)
  • Transformers (tested on 3.4.0)
  • numpy (tested on 1.19.4)
  • apex (tested on 0.1)
  • opt-einsum (tested on 3.3.0)
  • wandb
  • ujson
  • tqdm

Dataset

The DocRED dataset can be downloaded following the instructions at link. The CDR and GDA datasets can be obtained following the instructions in edge-oriented graph. The expected structure of files is:

ATLOP
 |-- dataset
 |    |-- docred
 |    |    |-- train_annotated.json        
 |    |    |-- train_distant.json
 |    |    |-- dev.json
 |    |    |-- test.json
 |    |-- cdr
 |    |    |-- train_filter.data
 |    |    |-- dev_filter.data
 |    |    |-- test_filter.data
 |    |-- gda
 |    |    |-- train.data
 |    |    |-- dev.data
 |    |    |-- test.data
 |-- meta
 |    |-- rel2id.json
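
Each DocRED file above is a JSON list of documents. As a quick orientation, here is a minimal sketch for inspecting one record, assuming only the publicly documented DocRED schema (title, sents, vertexSet, labels):

    import ujson

    # Load the annotated training split (path as in the tree above).
    with open("dataset/docred/train_annotated.json") as f:
        docs = ujson.load(f)

    doc = docs[0]
    print(doc["title"])              # document title
    print(len(doc["sents"]))         # tokenized sentences
    print(len(doc["vertexSet"]))     # entities, each a list of mentions
    for label in doc["labels"][:3]:  # relation facts: head, tail, relation id
        print(label["h"], label["t"], label["r"])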

Training and Evaluation

DocRED

Train the BERT or RoBERTa model on DocRED with one of the following commands:

>> sh scripts/run_bert.sh  # for BERT
>> sh scripts/run_roberta.sh  # for RoBERTa

The training loss and evaluation results on the dev set are synced to the wandb dashboard.

The program will generate a test file result.json in the official evaluation format. You can compress and submit it to CodaLab for the official test score.
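
A minimal sketch of the packaging step, assuming the result.json produced above (the evaluation server expects a zip archive):

    import zipfile

    # Compress result.json into result.zip for submission.
    with zipfile.ZipFile("result.zip", "w", zipfile.ZIP_DEFLATED) as zf:
        zf.write("result.json")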

CDR and GDA

Train the CDR and GDA models with the following commands:

>> sh scripts/run_cdr.sh  # for CDR
>> sh scripts/run_gda.sh  # for GDA

The training loss and evaluation results on the dev and test sets are synced to the wandb dashboard.

Saving and Evaluating Models

You can save the model by setting the --save_path argument before training. The model that achieves the best dev results will be saved. After that, you can evaluate the saved model by setting the --load_path argument; the code will then skip training and evaluate the saved model on the benchmarks. I've also released the trained atlop-bert-base and atlop-roberta models.
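
As a rough illustration of that save-best behavior (a minimal sketch under assumed names, not the repository's exact code):

    import torch

    def maybe_save(model, dev_f1: float, best_f1: float, save_path: str) -> float:
        # Keep only the checkpoint with the best dev F1 seen so far.
        if save_path != "" and dev_f1 > best_f1:
            torch.save(model.state_dict(), save_path)
            return dev_f1
        return best_f1

    def maybe_load(model, load_path: str) -> None:
        # With --load_path set, training is skipped and the saved weights loaded.
        if load_path != "":
            model.load_state_dict(torch.load(load_path))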

Comments
  • The results of ATLOP based on the bert-base-cased model on the DocRED dataset

    Hello, I retrained ATLOP based on the bert-base-cased model on the DocRED dataset. However, the max F1 and F1_ign scores on the dev set are 58.81 and 57.09, respectively, which are much lower than the scores reported in your paper (61.09 and 59.22). Is the default model config correct? Best regards. My environment is as follows:

    Python 3.7.8
    PyTorch 1.4.0
    Transformers 3.3.1
    apex 0.1
    opt-einsum 3.3.0
    
    opened by donghaozhang95 11
  • The main purpose of the function: get_label

    Hi @wzhouad,

    Thanks so much for releasing your source code. I'm just wondering what the function get_label() in losses.py does when computing the final loss. Could you please explain it? Thanks for your help!

    opened by angelotran05 5
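
    For context, a minimal sketch of what an adaptive-thresholding get_label typically does, following the paper's description (class 0 is the learned threshold class, and a relation is predicted only when its logit exceeds the threshold logit; this is a sketch, not necessarily the repository's exact code):

        import torch

        def get_label(logits: torch.Tensor) -> torch.Tensor:
            # logits: [batch, num_classes]; class 0 is the threshold (TH) class.
            th_logit = logits[:, 0].unsqueeze(1)      # per-example threshold
            output = torch.zeros_like(logits)
            output[logits > th_logit] = 1.0           # classes above the threshold
            output[:, 0] = (output.sum(dim=1) == 0.).float()  # nothing above TH -> NA
            return output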
  • model.py

    When I run train.py, there is an error in model.py:

        line 45, in get_hrt
            e_att.append(attention[i, :, start + offset])
        IndexError: too many indices for tensor of dimension 1

    Thanks.

    opened by qiunlp 5
  • Mention embedding

    Hi there, thanks for your nice work. I'm a bit confused: in the function get_hrt(), do you use the embedding of the first subword token as the mention embedding instead of summing up all the wordpieces? And is the offset used here due to the insertion of the special token "*"? Please correct me if I'm wrong, thanks!

    opened by mk2x15 4
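
    For readers following along: the preprocessing inserts a "*" marker around each entity mention, a single token embedding is taken at the mention's (offset-shifted) start position, and entity embeddings are pooled from mention embeddings with logsumexp, as described in the paper. A minimal sketch with hypothetical names and dummy values:

        import torch

        sequence_output = torch.randn(128, 768)  # [seq_len, hidden], dummy values
        starts = [10, 57]                        # mention start indices (before offset)
        offset = 1                               # compensates for the prepended [CLS]

        # One embedding per mention, taken at the shifted start-marker position.
        mention_embs = [sequence_output[s + offset] for s in starts]
        # Entity embedding: logsumexp pooling over the entity's mentions.
        entity_emb = torch.logsumexp(torch.stack(mention_embs), dim=0)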
  • about the labels

    I see a line of code before the loss is output:

        if labels is not None:
            labels = [torch.tensor(label) for label in labels]
            labels = torch.cat(labels, dim=0).to(logits)
            loss = self.loss_fnt(logits.float(), labels.float())
            output = (loss.to(sequence_output),) + output

    Why can labels sometimes be None? Did I get something wrong?

    opened by ChristopherAmadeusMiao 4
  • The best results with the same random seed differ each time I train ATLOP

    Hello, I trained ATLOP with the same random seed (66) every time, but the final best results differ between runs. Have you encountered this before? Thank you for your reply.

    opened by Lanyu123 4
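
    Seeding alone does not make CUDA training fully deterministic; some cuDNN kernels are nondeterministic by default. A minimal sketch of the standard PyTorch switches (general advice, not specific to this repository):

        import random
        import numpy as np
        import torch

        def set_seed(seed: int = 66) -> None:
            random.seed(seed)
            np.random.seed(seed)
            torch.manual_seed(seed)
            torch.cuda.manual_seed_all(seed)
            # Trade speed for reproducibility in cuDNN kernels.
            torch.backends.cudnn.deterministic = True
            torch.backends.cudnn.benchmark = False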
  • Any plans to release the codes for CDR?

    Hello Zhou,

    Thank you for releasing the code for your work. Your paper reports experimental results on CDR, and I would like to reproduce them with your approach. Do you have any plans to release the code for CDR?

    opened by mjeensung 4
  • About the process_long_input.py

    I got the following error. Could you help me? Thank you!

        Traceback (most recent call last):
          File "train.py", line 228, in <module>
            main()
          File "train.py", line 216, in main
            train(args, model, train_features, dev_features, test_features)
          File "train.py", line 74, in train
            finetune(train_features, optimizer, args.num_train_epochs, num_steps)
          File "train.py", line 38, in finetune
            outputs = model(**inputs)
          File "D:\Anaconda\envs\pytorch-GPU\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl
            result = self.forward(*input, **kwargs)
          File "D:\code\ATLOP\model.py", line 95, in forward
            sequence_output, attention = self.encode(input_ids, attention_mask)
          File "D:\code\ATLOP\model.py", line 32, in encode
            sequence_output, attention = process_long_input(self.model, input_ids, attention_mask, start_tokens, end_tokens)
          File "D:\code\ATLOP\long_seq.py", line 17, in process_long_input
            output_attentions=True,
          File "D:\Anaconda\envs\pytorch-GPU\lib\site-packages\torch\nn\modules\module.py", line 727, in _call_impl
            result = self.forward(*input, **kwargs)
        TypeError: forward() got an unexpected keyword argument 'output_attentions'

    opened by MingYang1127 3
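
    This TypeError usually indicates that the encoder being wrapped does not accept the output_attentions keyword, which points to a library-version mismatch; the README pins Transformers 3.4.0. A quick sanity check (a hedged suggestion, not an official fix):

        import transformers

        # process_long_input passes output_attentions=True to the encoder, so the
        # installed transformers must support it (3.4.0 is the tested version).
        print(transformers.__version__)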
  • Can you please release trained model?

    Hi. Thank you for releasing the code for your model; it is really helpful.

    However, I tried to retrain ATLOP based on the bert-base-cased model on the DocRED dataset and cannot reach the results reported in your paper. I also cannot retrain the roberta-large model because I don't have a strong enough GPU (the strongest GPU on Google Colab is a V100). So could you please release your trained models? I would be very happy if you did, and I believe it would help many other people too.

    Thank you so much.

    opened by nguyenhuuthuat09 3
  • Where did the "/meta/rel2id.json" come from?

    I only want to use the DocRED dataset, and it contains only "rel_info.json". Could you please tell me how I can get rel2id.json? I tried renaming rel_info.json to rel2id.json, but got:

          File "train.py", line 197, in main
            train_features = read(train_file, tokenizer, max_seq_length=args.max_seq_length)
          File "/home/kw/ATLOP/prepro.py", line 56, in read_docred
            r = int(docred_rel2id[label['r']])
        ValueError: invalid literal for int() with base 10: 'headquarters location'

    Thanks for your attention; I'm waiting for your reply.

    opened by AQA6666 2
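
    For orientation: rel2id.json maps relation identifiers (Wikidata P-codes) to integer class indices, with no-relation at index 0, while rel_info.json maps P-codes to human-readable names, hence the int() failure above. The official mapping ships with the DocRED benchmark; a hedged sketch of its shape (entries here are illustrative, and the exact index assignment matters for pretrained checkpoints, so use the official file rather than regenerating it):

        import ujson

        # Illustrative shape only; the real file fixes a specific assignment.
        rel2id = {"Na": 0, "P17": 1, "P131": 2}

        with open("meta/rel2id.json") as f:   # the official file
            rel2id = ujson.load(f)
        id2rel = {v: k for k, v in rel2id.items()}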
  • How should I be running the Enhanced BERT Baseline model?

    Hi. I recently tried to run the Enhanced BERT Baseline model (i.e., without adaptive threshold loss and local contextualized pooling) and just wanted to confirm if I'm doing it right.

    Basically, in model.py lines 86-111 (i.e., the forward method) I modified the code so that I don't use rs and changed self.head_extractor and self.tail_extractor to have in_features and out_features accordingly. I did this because I'm assuming that within the get_hrt method, rs is what LOP is since we're using attention there. Modifying the extractors also implies that I'm not concatenating hs and ts with rs.

    After that I changed loss_fnt to be a simple nn.BCEWithLogitsLoss rather than ATLoss. That means I also changed the get_label method within ATLoss to be a function so that I'm not depending on the class.

    Am I doing this right? Or is there another way that I should be implementing it?

    The reason I'm unsure whether I implemented this correctly is that I'm running the code on the TACRED dataset rather than DocRED, and while ATLOP itself performs satisfactorily, the Enhanced BERT Baseline performs much worse.

    Thanks.

    opened by seanswyi 2
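
    A minimal sketch of the swap the commenter describes (hypothetical names and sizes; this follows their description, not the repository's exact code):

        import torch
        import torch.nn as nn

        hidden_size, num_labels = 768, 97  # illustrative sizes

        # Without localized context pooling, rs is dropped, so the extractors
        # take hidden_size inputs instead of 2 * hidden_size.
        head_extractor = nn.Linear(hidden_size, hidden_size)
        tail_extractor = nn.Linear(hidden_size, hidden_size)
        classifier = nn.Linear(2 * hidden_size, num_labels)
        loss_fnt = nn.BCEWithLogitsLoss()  # plain BCE instead of ATLoss

        def baseline_loss(hs, ts, labels):
            # hs, ts: [n_pairs, hidden_size]; labels: [n_pairs, num_labels] multi-hot
            zs = torch.tanh(head_extractor(hs))
            zo = torch.tanh(tail_extractor(ts))
            logits = classifier(torch.cat([zs, zo], dim=-1))
            return loss_fnt(logits.float(), labels.float())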
  • The usage of the ATLoss

    Thanks for your amazing work! I am very interested in the ATLoss, but I have a small question. When using the ATLoss, should we add a no-relation label? For example, with 26 relation types, the gold labels may contain multiple relation types, but always at least one. How should no-relation be represented: should I create a tensor of size 27 and set the first label to 1, or a tensor of size 26 and set all labels to zero? Looking forward to your reply. Many thanks!

    opened by Onion12138 0
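
    Going by the paper's adaptive-thresholding formulation, class 0 serves as the threshold (TH) class, so the label vector has num_relations + 1 entries and no-relation is encoded by setting index 0 to 1. A hedged sketch (check prepro.py for the repository's exact convention):

        import torch

        num_relations = 26

        # Positive pair: relation types 3 and 7 hold.
        label = torch.zeros(num_relations + 1)
        label[3] = 1.0
        label[7] = 1.0

        # No-relation pair: only the threshold/NA class (index 0) is set.
        na_label = torch.zeros(num_relations + 1)
        na_label[0] = 1.0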
  • --save_path issue

    I edited the script file and added --save_path followed by a directory, but I can't see any saved models after running the script. Could you please explain in detail how to save a model?

    opened by rijukandathil 0
Owner
Wenxuan Zhou
Ph.D. student at University of Southern California