Source code for Transformer-based Multi-task Learning for Disaster Tweet Categorisation (UCD's participation in TREC-IS 2020A, 2020B and 2021A).

Overview

Source code for "UCD participation in TREC-IS 2020A, 2020B and 2021A".

*** Last updated: 2021/05/25

So far, this repository relates to the following work (a minimal sketch of the multi-task setup is included after the list):

  • Transformer-based Multi-task Learning for Disaster Tweet Categorisation, (WiP paper, ISCRAM 2021)
  • Multi-task transfer learning for finding actionable information from crisis-related messages on social media, (paper, TREC 2020)

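The core idea in both papers is a multi-task transformer: a shared encoder feeding separate classification heads for the TREC-IS subtasks. The snippet below is a minimal sketch of that idea using Hugging Face Transformers, assuming the two subtasks are multi-label information-type classification and priority estimation; the label counts are placeholders and this is not the actual model class used by run.py.

# Minimal multi-task sketch (not the repo's actual model class).
# Assumes two TREC-IS subtasks: multi-label information-type classification
# and priority estimation; num_info_types / num_priorities are placeholders.
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class MultiTaskTweetClassifier(nn.Module):
    def __init__(self, model_name="bert-base-uncased", num_info_types=25, num_priorities=4):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(model_name)
        hidden = self.encoder.config.hidden_size
        self.info_type_head = nn.Linear(hidden, num_info_types)  # multi-label head
        self.priority_head = nn.Linear(hidden, num_priorities)   # single-label head

    def forward(self, input_ids, attention_mask):
        out = self.encoder(input_ids=input_ids, attention_mask=attention_mask)
        pooled = out.last_hidden_state[:, 0]  # [CLS] representation
        return self.info_type_head(pooled), self.priority_head(pooled)

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = MultiTaskTweetClassifier()
batch = tokenizer(["Flooding reported downtown, need rescue"],
                  return_tensors="pt", padding=True, truncation=True)
info_logits, priority_logits = model(batch["input_ids"], batch["attention_mask"])
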
Setup

git clone https://github.com/wangcongcong123/crisis-mtl.git
cd crisis-mtl
pip install -r requirements.txt

Dataset preparation

  • Download the data splits prepared for the system from here; the archive contains three subdirectories for 2020a, 2020b and 2021a respectively.
  • Unzip the archive to data/ (a sketch of one way to do this follows this list).
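
The snippet below shows one way to unpack the downloaded archive into data/ and check that the three expected subdirectories are present. The archive filename is a placeholder for whatever name the download arrives with.

# Unpack the dataset splits (archive filename is a placeholder).
import zipfile
from pathlib import Path

archive = Path("trec_is_splits.zip")   # placeholder name for the downloaded file
data_dir = Path("data")
data_dir.mkdir(exist_ok=True)

with zipfile.ZipFile(archive) as zf:
    zf.extractall(data_dir)

for edition in ("2020a", "2020b", "2021a"):
    assert (data_dir / edition).is_dir(), f"missing split directory: data/{edition}"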

Training and submitting

# for 2020a
python run.py --dataset_name 2020a --model_name bert-base-uncased

# or for 2020b
python run.py --edition 2020b --model_name bert-base-uncased
python run.py --edition 2020b --model_name google/electra-base-discriminator
python run.py --edition 2020b --model_name microsoft/deberta-base
python run.py --edition 2020b --model_name distilbert-base-uncased
python submit_ensemble.py --edition 2020b


# or for 2021a
python run.py --edition 2021a --model_name bert-base-uncased
python run.py --edition 2021a --model_name google/electra-base-discriminator
python run.py --edition 2021a --model_name microsoft/deberta-base
python run.py --edition 2021a --model_name distilbert-base-uncased
python submit_ensemble.py --edition 2021a

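submit_ensemble.py combines the runs produced by the individual models into a single submission. The sketch below illustrates one common way to do this, averaging per-model predicted probabilities before thresholding; it is only a hedged illustration, not the exact logic of submit_ensemble.py, and the file layout and format are placeholders.

# Illustrative ensembling by probability averaging (not the exact logic of
# submit_ensemble.py; file names and format are placeholders).
import json
from pathlib import Path

import numpy as np

run_files = list(Path("runs/2021a").glob("*_probs.json"))  # placeholder layout

# Each file is assumed to map tweet_id -> list of per-label probabilities.
per_model = [json.loads(p.read_text()) for p in run_files]
tweet_ids = sorted(per_model[0].keys())

ensembled = {}
for tid in tweet_ids:
    probs = np.mean([np.asarray(m[tid]) for m in per_model], axis=0)
    ensembled[tid] = (probs >= 0.5).astype(int).tolist()  # simple 0.5 threshold

Path("runs/2021a/ensemble_submission.json").write_text(json.dumps(ensembled))
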
To see how our results compare to other participating runs in 2020a and 2020b, check the appendix of this overview paper. For the details of our approach, check this ISCRAM 2021 paper for 2020a and this TREC 2020 paper for 2020b. The evaluation for 2021a is still in progress; the results will be added as soon as they are released.

Citation

If you use the code in your research, please consider citing the following papers:

@inproceedings{wang2021,
  author = {Wang, Congcong and Nulty, Paul and Lillis, David},
  title = {{Transformer-based Multi-task Learning for Disaster Tweet Categorisation}},
  booktitle = {Proceedings of the 18th International Conference on Information Systems for Crisis Response and Management ({ISCRAM} 2021)},
  month = {May},
  year = {2021}
}

@inproceedings{congcong2020multi,
  author = {Wang, Congcong and Lillis, David},
  title = {Multi-task transfer learning for finding actionable information from crisis-related messages on social media},
  booktitle = {Proceedings of the Twenty-Ninth {{Text REtrieval Conference}} ({{TREC}} 2020)},
  address = {Gaithersburg, MD},
  year = {2020}
}

Queries

If you have any questions, contact me at [email protected] or create an issue.
