SCAPT-ABSA

Code for the EMNLP 2021 paper: "Learning Implicit Sentiment in Aspect-based Sentiment Analysis with Supervised Contrastive Pre-Training"

Overview

In this repository, we provide code for Supervised ContrAstive Pre-Training (SCAPT) and aspect-aware fine-tuning, sentiment corpora retrieved from YELP/Amazon reviews, and the SemEval2014 Restaurant/Laptop datasets with additional implicit_sentiment labeling.

SCAPT aims to tackle implicit sentiment expressions in aspect-based sentiment analysis (ABSA). In our work, we define implicit sentiment as a sentiment expression that contains no polarity markers but still conveys clear human-aware sentiment polarity.

Here are examples of explicit and implicit sentiment in ABSA:

[Figure: examples of explicit and implicit sentiment in ABSA]

SCAPT

SCAPT aligns the representations of sentiment expressions that share the same sentiment label. It consists of three objectives:

  • Supervised Contrastive Learning (SCL)
  • Review Reconstruction (RR)
  • Masked Aspect Prediction (MAP)
[Figure: overview of SCAPT and its pre-training objectives]
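
For reference, below is a minimal sketch of a supervised contrastive loss in the style of Khosla et al. (2020); it illustrates the SCL objective only and is not the repository's exact implementation:

    import torch
    import torch.nn.functional as F

    def supervised_contrastive_loss(features, labels, temperature=0.07):
        # features: (batch, dim) review representations; labels: (batch,) sentiment ids
        features = F.normalize(features, dim=-1)
        logits = features @ features.T / temperature
        self_mask = torch.eye(labels.size(0), dtype=torch.bool, device=features.device)
        pos_mask = labels.unsqueeze(0).eq(labels.unsqueeze(1)) & ~self_mask
        # the softmax denominator runs over all non-self pairs
        denom = logits.exp().masked_fill(self_mask, 0.0).sum(dim=1, keepdim=True)
        log_prob = logits - denom.log()
        # average log-probability of same-label (positive) pairs per anchor;
        # anchors without any positive in the batch are skipped
        pos_counts = pos_mask.sum(dim=1)
        mean_log_prob_pos = (log_prob * pos_mask).sum(dim=1) / pos_counts.clamp(min=1)
        return -mean_log_prob_pos[pos_counts > 0].mean()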

Aspect-aware Fine-tuning

In aspect-aware fine-tuning, both the sentiment representation and the aspect-based representation are taken into account for sentiment prediction.

[Figure: aspect-aware fine-tuning]
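
As a heavily hedged sketch of this idea (the class name, shapes, and pooling choices are our assumptions, not the repository's actual module), the sentence-level sentiment representation and a pooled aspect representation can be fused before the polarity classifier:

    import torch
    import torch.nn as nn

    class AspectAwareHead(nn.Module):
        # Illustrative only: combine a sentence-level sentiment vector with an
        # aspect vector for three-way polarity prediction.
        def __init__(self, hidden=768, num_classes=3):
            super().__init__()
            self.classifier = nn.Linear(2 * hidden, num_classes)

        def forward(self, sent_repr, aspect_repr):
            # sent_repr: (batch, hidden), e.g. the [CLS] embedding
            # aspect_repr: (batch, hidden), e.g. mean-pooled aspect-token embeddings
            return self.classifier(torch.cat([sent_repr, aspect_repr], dim=-1))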

Requirements

  • cuda 11.0
  • python 3.7.9
    • lxml 4.6.2
    • numpy 1.19.2
    • pytorch 1.8.0
    • pyyaml 5.3.1
    • tqdm 4.55.0
    • transformers 4.2.2
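
A minimal environment setup, assuming the PyPI package names (PyTorch with CUDA 11.0 may require the platform-specific command from pytorch.org):

    pip install lxml==4.6.2 numpy==1.19.2 torch==1.8.0 pyyaml==5.3.1 tqdm==4.55.0 transformers==4.2.2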

Data Preparation & Preprocessing

For Pre-training

The retrieved sentiment corpora contain millions of reviews. We provide download links for both the original corpora and the preprocessed data; download them if you want to run pre-training or use the corpora further:

File                   Google Drive Link   Baidu Wangpan Link   Baidu Wangpan Code
scapt_yelp_json.zip    link                link                 q7fs
scapt_amazon_json.zip  link                link                 i1da
scapt_yelp_pkl.zip     link                link                 j9ce
scapt_amazon_pkl.zip   link                link                 3b8t

These pickle files can also be generated from the JSON files with the preprocessing script:

    python preprocess.py --pretrain

For Fine-tuning

We have already added opinion term labeling to the original SemEval2014 datasets. For example:

    <sentence id="1634">
        <text>The food is uniformly exceptional, with a very capable kitchen which will proudly whip up whatever you feel like eating, whether it's on the menu or not.</text>
        <aspectTerms>
            <aspectTerm term="food" polarity="positive" from="4" to="8" implicit_sentiment="False" opinion_words="exceptional"/>
            <aspectTerm term="kitchen" polarity="positive" from="55" to="62" implicit_sentiment="False" opinion_words="capable"/>
            <aspectTerm term="menu" polarity="neutral" from="141" to="145" implicit_sentiment="True"/>
        </aspectTerms>
        <aspectCategories>
            <aspectCategory category="food" polarity="positive"/>
        </aspectCategories>
    </sentence>

implicit_sentiment indicates whether the aspect carries an implicit sentiment expression; when it is not implicit, opinion_words gives the corresponding opinion terms. The opinion_words labeling is credited to TOWE.
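
To illustrate the format, here is a small, hypothetical snippet that reads aspect terms and their implicit-sentiment flags with lxml (the file path is just an example):

    from lxml import etree

    tree = etree.parse("restaurants/Restaurants_Train_v2_Implicit_Labeled.xml")
    for sentence in tree.getroot().iter("sentence"):
        for term in sentence.iter("aspectTerm"):
            implicit = term.get("implicit_sentiment") == "True"
            # implicit aspects carry no opinion_words attribute
            print(term.get("term"), term.get("polarity"),
                  "implicit" if implicit else term.get("opinion_words"))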

Both the original and the extended fine-tuning data, together with their preprocessed dumps, are included in this repository.

Consequently, the structure of your data directory should be:

├── Amazon
│   ├── amazon_laptops.json
│   └── amazon_laptops_preprocess_pretrain.pkl
├── laptops
│   ├── Laptops_Test_Gold_Implicit_Labeled_preprocess_finetune.pkl
│   ├── Laptops_Test_Gold_Implicit_Labeled.xml
│   ├── Laptops_Test_Gold.xml
│   ├── Laptops_Train_v2_Implicit_Labeled_preprocess_finetune.pkl
│   ├── Laptops_Train_v2_Implicit_Labeled.xml
│   └── Laptops_Train_v2.xml
├── MAMS
│   ├── test_preprocess_finetune.pkl
│   ├── test.xml
│   ├── train_preprocess_finetune.pkl
│   ├── train.xml
│   ├── val_preprocess_finetune.pkl
│   └── val.xml
├── restaurants
│   ├── Restaurants_Test_Gold_Implicit_Labeled_preprocess_finetune.pkl
│   ├── Restaurants_Test_Gold_Implicit_Labeled.xml
│   ├── Restaurants_Test_Gold.xml
│   ├── Restaurants_Train_v2_Implicit_Labeled_preprocess_finetune.pkl
│   ├── Restaurants_Train_v2_Implicit_Labeled.xml
│   └── Restaurants_Train_v2.xml
└── YELP
    ├── yelp_restaurants.json
    └── yelp_restaurants_preprocess_pretrain.pkl

Pre-training

The pre-training is conducted on multiple GPUs.

  • Pre-training [TransEnc|BERT] on [YELP|Amazon]:

    python -m torch.distributed.launch --nproc_per_node=${THE_CARD_NUM_YOU_HAVE} multi_card_train.py --config config/[yelp|amazon]_[TransEnc|BERT]_pretrain.yml
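
For example, pre-training BERT on YELP with 4 GPUs (the GPU count is illustrative):

    python -m torch.distributed.launch --nproc_per_node=4 multi_card_train.py --config config/yelp_BERT_pretrain.yml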

Model checkpoints are saved in the results directory.

Fine-tuning

  • Directly train [TransEnc|BERT] on [Restaurants|Laptops|MAMS] as [TransEncAsp|BERTAsp]:

    python train.py --config config/[restaurants|laptops|mams]_[TransEnc|BERT]_finetune.yml
  • Fine-tune the pre-trained [TransEnc|BERT] on [Restaurants|Laptops|MAMS] as [TransEncAsp+SCAPT|BERTAsp+SCAPT]:

    python train.py --config config/[restaurants|laptops|mams]_[TransEnc|BERT]_finetune.yml --checkpoint PATH/TO/MODEL_CHECKPOINT
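
For example, fine-tuning a pre-trained BERT on Restaurants to obtain BERTAsp+SCAPT (the checkpoint path is a placeholder):

    python train.py --config config/restaurants_BERT_finetune.yml --checkpoint PATH/TO/MODEL_CHECKPOINT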

Model checkpoints are saved in the results directory.

Evaluation

  • Evaluate [TransEnc|BERT]-based model on [Restaurants|Laptops|MAMS] dataset:

    python evaluate.py --config config/[restaurants|laptops|mams]_[TransEnc|BERT]_finetune.yml --checkpoint PATH/TO/MODEL_CHECKPOINT

Our model parameters:

Model              Dataset                 File                      Google Drive Link   Baidu Wangpan Link   Baidu Wangpan Code
TransEncAsp+SCAPT  SemEval2014 Restaurant  TransEnc_restaurants.zip  link                link                 5e5c
TransEncAsp+SCAPT  SemEval2014 Laptop      TransEnc_laptops.zip      link                link                 8amq
TransEncAsp+SCAPT  MAMS                    TransEnc_MAMS.zip         link                link                 bf2x
BERTAsp+SCAPT      SemEval2014 Restaurant  BERT_restaurants.zip      link                link                 1w2e
BERTAsp+SCAPT      SemEval2014 Laptop      BERT_laptops.zip          link                link                 zhte
BERTAsp+SCAPT      MAMS                    BERT_MAMS.zip             link                link                 1iva

Citation

If you find this repository useful, please cite our paper:

@inproceedings{li-etal-2021-learning-implicit,
    title = "Learning Implicit Sentiment in Aspect-based Sentiment Analysis with Supervised Contrastive Pre-Training",
    author = "Li, Zhengyan  and
      Zou, Yicheng  and
      Zhang, Chong  and
      Zhang, Qi  and
      Wei, Zhongyu",
    booktitle = "Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing",
    month = nov,
    year = "2021",
    address = "Online and Punta Cana, Dominican Republic",
    publisher = "Association for Computational Linguistics",
    url = "https://aclanthology.org/2021.emnlp-main.22",
    pages = "246--256",
    abstract = "Aspect-based sentiment analysis aims to identify the sentiment polarity of a specific aspect in product reviews. We notice that about 30{\%} of reviews do not contain obvious opinion words, but still convey clear human-aware sentiment orientation, which is known as implicit sentiment. However, recent neural network-based approaches paid little attention to implicit sentiment entailed in the reviews. To overcome this issue, we adopt Supervised Contrastive Pre-training on large-scale sentiment-annotated corpora retrieved from in-domain language resources. By aligning the representation of implicit sentiment expressions to those with the same sentiment label, the pre-training process leads to better capture of both implicit and explicit sentiment orientation towards aspects in reviews. Experimental results show that our method achieves state-of-the-art performance on SemEval2014 benchmarks, and comprehensive analysis validates its effectiveness on learning implicit sentiment.",
}