SCAPT-ABSA

Code for the EMNLP 2021 paper: "Learning Implicit Sentiment in Aspect-based Sentiment Analysis with Supervised Contrastive Pre-Training"

Overview

In this repository, we provide the code for Supervised ContrAstive Pre-Training (SCAPT) and aspect-aware fine-tuning, the sentiment corpora retrieved from YELP/Amazon reviews, and the SemEval2014 Restaurant/Laptop datasets with additional implicit_sentiment labeling.

SCAPT aims to tackle implicit sentiment expressions in aspect-based sentiment analysis (ABSA). In our work, we define implicit sentiment as a sentiment expression that contains no polarity markers but still conveys clear human-aware sentiment polarity.

Here are examples of explicit and implicit sentiment in ABSA:

(Figure: examples of explicit and implicit sentiment in ABSA)

SCAPT

SCAPT aligns the representations of sentiment expressions that carry the same sentiment label. It consists of three objectives (a minimal sketch of the contrastive objective follows the figure below):

  • Supervised Contrastive Learning (SCL)
  • Review Reconstruction (RR)
  • Masked Aspect Prediction (MAP)

(Figure: the SCAPT pre-training framework)
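
The supervised contrastive objective pulls representations with the same sentiment label together and pushes those with different labels apart. Below is a minimal, illustrative PyTorch sketch of such a loss over a batch of review representations; the function and variable names are ours, and the exact formulation used in the paper may differ in details such as temperature or projection heads.

    import torch
    import torch.nn.functional as F

    def supervised_contrastive_loss(features, labels, temperature=0.1):
        # features: (batch, dim) review representations; labels: (batch,) sentiment labels
        features = F.normalize(features, dim=-1)
        sim = features @ features.t() / temperature
        batch = features.size(0)
        not_self = ~torch.eye(batch, dtype=torch.bool, device=features.device)
        # Positives are other samples sharing the anchor's sentiment label.
        pos_mask = (labels.unsqueeze(0) == labels.unsqueeze(1)) & not_self

        sim = sim.masked_fill(~not_self, float('-inf'))            # drop self-similarity
        log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)

        pos_count = pos_mask.sum(dim=1)
        valid = pos_count > 0                                      # anchors that have positives
        mean_log_prob_pos = log_prob.masked_fill(~pos_mask, 0.0).sum(dim=1)[valid] / pos_count[valid]
        return -mean_log_prob_pos.mean()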

Aspect-aware Fine-tuning

Both the sentence-level sentiment representation and the aspect-based representation are taken into account for sentiment prediction in aspect-aware fine-tuning; an illustrative sketch follows the figure below.

(Figure: aspect-aware fine-tuning)
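
As an illustration of this idea, the hypothetical module below concatenates a sentence-level sentiment representation (the first, [CLS]-style token) with a mean-pooled representation of the aspect tokens before classifying the polarity. It is only a sketch under our own naming; the actual head in this repository may be organised differently.

    import torch
    import torch.nn as nn

    class AspectAwareClassifier(nn.Module):
        # Illustrative head combining sentence-level and aspect-level representations.
        def __init__(self, hidden_size=768, num_polarities=3):
            super().__init__()
            self.classifier = nn.Linear(2 * hidden_size, num_polarities)

        def forward(self, hidden_states, aspect_mask):
            # hidden_states: (batch, seq_len, hidden) encoder outputs
            # aspect_mask:   (batch, seq_len), 1 for aspect tokens, 0 elsewhere
            sentiment_repr = hidden_states[:, 0]
            mask = aspect_mask.unsqueeze(-1).float()
            aspect_repr = (hidden_states * mask).sum(dim=1) / mask.sum(dim=1).clamp(min=1.0)
            return self.classifier(torch.cat([sentiment_repr, aspect_repr], dim=-1))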

Requirements

  • cuda 11.0
  • python 3.7.9
    • lxml 4.6.2
    • numpy 1.19.2
    • pytorch 1.8.0
    • pyyaml 5.3.1
    • tqdm 4.55.0
    • transformers 4.2.2

Data Preparation & Preprocessing

For Pre-training

The retrieved sentiment corpora contain millions of reviews, so we provide download links for both the original corpora and the preprocessed data. Download them if you want to run pre-training or use the corpora further:

| File | Google Drive Link | Baidu Wangpan Link | Baidu Wangpan Code |
| --- | --- | --- | --- |
| scapt_yelp_json.zip | link | link | q7fs |
| scapt_amazon_json.zip | link | link | i1da |
| scapt_yelp_pkl.zip | link | link | j9ce |
| scapt_amazon_pkl.zip | link | link | 3b8t |

These pickle files can also be generated from the JSON files with the preprocessing script:

    python preprocess.py --pretrain

For Fine-tuning

We have already merged the opinion term labeling into the original SemEval2014 datasets. For example:

    <sentence id="1634">
        <text>The food is uniformly exceptional, with a very capable kitchen which will proudly whip up whatever you feel like eating, whether it's on the menu or not.</text>
        <aspectTerms>
            <aspectTerm term="food" polarity="positive" from="4" to="8" implicit_sentiment="False" opinion_words="exceptional"/>
            <aspectTerm term="kitchen" polarity="positive" from="55" to="62" implicit_sentiment="False" opinion_words="capable"/>
            <aspectTerm term="menu" polarity="neutral" from="141" to="145" implicit_sentiment="True"/>
        </aspectTerms>
        <aspectCategories>
            <aspectCategory category="food" polarity="positive"/>
        </aspectCategories>
    </sentence>

The implicit_sentiment attribute indicates whether the aspect is an implicit sentiment expression, and opinion_words are provided when the sentiment is not implicit. The opinion_words labeling is credited to TOWE.
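
For reference, the annotations can be read with Python's standard xml.etree module (lxml, listed in the requirements, exposes the same interface); the file path below is only an example relative to your data directory:

    import xml.etree.ElementTree as ET

    # Example path; adjust to where your data directory lives.
    tree = ET.parse("restaurants/Restaurants_Train_v2_Implicit_Labeled.xml")
    for sentence in tree.getroot().iter("sentence"):
        text = sentence.findtext("text")
        for term in sentence.iter("aspectTerm"):
            implicit = term.get("implicit_sentiment") == "True"
            print(term.get("term"), term.get("polarity"),
                  "implicit" if implicit else term.get("opinion_words"))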

The original and extended fine-tuning data, along with their preprocessed dumps, are uploaded to this repository.

Consequently, the structure of your data directory should be:

├── Amazon
│   ├── amazon_laptops.json
│   └── amazon_laptops_preprocess_pretrain.pkl
├── laptops
│   ├── Laptops_Test_Gold_Implicit_Labeled_preprocess_finetune.pkl
│   ├── Laptops_Test_Gold_Implicit_Labeled.xml
│   ├── Laptops_Test_Gold.xml
│   ├── Laptops_Train_v2_Implicit_Labeled_preprocess_finetune.pkl
│   ├── Laptops_Train_v2_Implicit_Labeled.xml
│   └── Laptops_Train_v2.xml
├── MAMS
│   ├── test_preprocess_finetune.pkl
│   ├── test.xml
│   ├── train_preprocess_finetune.pkl
│   ├── train.xml
│   ├── val_preprocess_finetune.pkl
│   └── val.xml
├── restaurants
│   ├── Restaurants_Test_Gold_Implicit_Labeled_preprocess_finetune.pkl
│   ├── Restaurants_Test_Gold_Implicit_Labeled.xml
│   ├── Restaurants_Test_Gold.xml
│   ├── Restaurants_Train_v2_Implicit_Labeled_preprocess_finetune.pkl
│   ├── Restaurants_Train_v2_Implicit_Labeled.xml
│   └── Restaurants_Train_v2.xml
└── YELP
    ├── yelp_restaurants.json
    └── yelp_restaurants_preprocess_pretrain.pkl

Pre-training

The pre-training is conducted on multiple GPUs.

  • Pre-training [TransEnc|BERT] on [YELP|Amazon]:

    python -m torch.distributed.launch --nproc_per_node=${NUM_GPUS} multi_card_train.py --config config/[yelp|amazon]_[TransEnc|BERT]_pretrain.yml

Model checkpoints are saved in the results directory.

Fine-tuning

  • Directly train [TransEnc|BERT] on [Restaurants|Laptops|MAMS] as [TransEncAsp|BERTAsp]:

    python train.py --config config/[restaurants|laptops|mams]_[TransEnc|BERT]_finetune.yml
  • Fine-tune the pre-trained [TransEnc|BERT] on [Restaurants|Laptops|MAMS] as [TransEncAsp+SCAPT|BERTAsp+SCAPT]:

    python train.py --config config/[restaurants|laptops|mams]_[TransEnc|BERT]_finetune.yml --checkpoint PATH/TO/MODEL_CHECKPOINT

Model checkpoints are saved in the results directory.

Evaluation

  • Evaluate [TransEnc|BERT]-based model on [Restaurants|Laptops|MAMS] dataset:

    python evaluate.py --config config/[restaurants|laptops|mams]_[TransEnc|BERT]_finetune.yml --checkpoint PATH/TO/MODEL_CHECKPOINT

Checkpoints of our trained models:

| Model | Dataset | File | Google Drive Link | Baidu Wangpan Link | Baidu Wangpan Code |
| --- | --- | --- | --- | --- | --- |
| TransEncAsp+SCAPT | SemEval2014 Restaurant | TransEnc_restaurants.zip | link | link | 5e5c |
| TransEncAsp+SCAPT | SemEval2014 Laptop | TransEnc_laptops.zip | link | link | 8amq |
| TransEncAsp+SCAPT | MAMS | TransEnc_MAMS.zip | link | link | bf2x |
| BERTAsp+SCAPT | SemEval2014 Restaurant | BERT_restaurants.zip | link | link | 1w2e |
| BERTAsp+SCAPT | SemEval2014 Laptop | BERT_laptops.zip | link | link | zhte |
| BERTAsp+SCAPT | MAMS | BERT_MAMS.zip | link | link | 1iva |

Citation

If you find this repository useful, please cite our paper:

@inproceedings{li-etal-2021-learning-implicit,
    title = "Learning Implicit Sentiment in Aspect-based Sentiment Analysis with Supervised Contrastive Pre-Training",
    author = "Li, Zhengyan  and
      Zou, Yicheng  and
      Zhang, Chong  and
      Zhang, Qi  and
      Wei, Zhongyu",
    booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2021",
    address = "Online and Punta Cana, Dominican Republic",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.emnlp-main.22",
    pages = "246--256",
    abstract = "Aspect-based sentiment analysis aims to identify the sentiment polarity of a specific aspect in product reviews. We notice that about 30{\%} of reviews do not contain obvious opinion words, but still convey clear human-aware sentiment orientation, which is known as implicit sentiment. However, recent neural network-based approaches paid little attention to implicit sentiment entailed in the reviews. To overcome this issue, we adopt Supervised Contrastive Pre-training on large-scale sentiment-annotated corpora retrieved from in-domain language resources. By aligning the representation of implicit sentiment expressions to those with the same sentiment label, the pre-training process leads to better capture of both implicit and explicit sentiment orientation towards aspects in reviews. Experimental results show that our method achieves state-of-the-art performance on SemEval2014 benchmarks, and comprehensive analysis validates its effectiveness on learning implicit sentiment.",
}