This is the official source code of BiCAT, from the paper "Sequential Recommendation with Bidirectional Chronological Augmentation of Transformer".

Overview

[Figure: BiCAT model overview]

This is our TensorFlow implementation of the paper "Sequential Recommendation with Bidirectional Chronological Augmentation of Transformer". Our code is built on the TensorFlow implementations of SASRec and ASReP.

Environment

  • TensorFlow 1.12
  • Python 3.6.*

Dataset Preparation

Benchmarks: the Amazon Review datasets Beauty and Cell_Phones_and_Accessories, and MovieLens. The data are split in the leave-one-out setting. Make sure you download the datasets from the link. Then use DataProcessing.py under data/, set the DATASET variable to your dataset name, and run:

python DataProcessing.py

You will find the processed dataset in the directory with the name of your input dataset.
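
In the usual leave-one-out setup for sequential recommendation, each user's last item is held out for testing and the second-to-last for validation. The sketch below illustrates that split only; the function and variable names are illustrative, not those used in DataProcessing.py.

def leave_one_out(user_seqs):
    """user_seqs: dict mapping user id -> chronologically ordered item list."""
    train, valid, test = {}, {}, {}
    for user, items in user_seqs.items():
        if len(items) < 3:
            # too short to split; keep everything for training
            train[user], valid[user], test[user] = list(items), [], []
        else:
            train[user] = items[:-2]    # all but the last two items
            valid[user] = [items[-2]]   # second-to-last item for validation
            test[user] = [items[-1]]    # last item for testing
    return train, valid, test

train, valid, test = leave_one_out({"u1": [3, 7, 12, 5], "u2": [4, 9, 6]})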

Beauty

1. Reversed Pre-training and Short-Sequence Augmentation

Pre-train the model and generate 20 items for each sequence with length <= 20.

python main.py \
       --dataset=Beauty \
       --train_dir=default \
       --lr=0.001 \
       --hidden_units=128 \
       --maxlen=100 \
       --dropout_rate=0.7 \
       --num_blocks=2 \
       --l2_emb=0.0 \
       --num_heads=4 \
       --evalnegsample 100 \
       --reversed 1 \
       --reversed_gen_num 20 \
       --M 20
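
Conceptually, this step trains the Transformer on reversed (right-to-left) sequences and then uses it to generate pseudo-prior items that are prepended to short sequences, so that later left-to-right training sees longer histories. The sketch below shows only that prepending logic; generate_prev_item is a hypothetical stand-in for the reversely pre-trained model, and the defaults loosely mirror the --M and --reversed_gen_num values above.

def augment_short_sequences(user_seqs, generate_prev_item, short_len=20, gen_num=20):
    """Prepend up to gen_num pseudo-items to each sequence of length <= short_len."""
    augmented = {}
    for user, items in user_seqs.items():
        seq = list(items)
        if len(seq) <= short_len:
            for _ in range(gen_num):
                pseudo = generate_prev_item(seq)  # item predicted to precede the current head
                seq.insert(0, pseudo)             # extend the history backwards in time
        augmented[user] = seq
    return augmented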

2. Next-Item Prediction with the Reversed Pre-Trained Model and Augmented Dataset

python main.py \
       --dataset=Beauty \
       --train_dir=default \
       --lr=0.001 \
       --hidden_units=128 \
       --maxlen=100 \
       --dropout_rate=0.7 \
       --num_blocks=2 \
       --l2_emb=0.0 \
       --num_heads=4 \
       --evalnegsample 100 \
       --reversed_pretrain 1 \
       --aug_traindata 15 \
       --M 18
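
After augmentation, this step is standard left-to-right next-item prediction on the lengthened sequences. As a rough illustration of how one training example is typically formed in SASRec-style models (names and padding scheme are assumptions, not the repo's code), the inputs are the sequence shifted by one step against the targets, and both are left-padded to --maxlen:

import numpy as np

def build_training_example(seq, maxlen=100, pad_id=0):
    """Turn one (augmented) item sequence into (inputs, targets) arrays of length maxlen."""
    inputs = np.full(maxlen, pad_id, dtype=np.int64)
    targets = np.full(maxlen, pad_id, dtype=np.int64)
    trimmed = seq[-(maxlen + 1):]       # keep at most the maxlen+1 most recent items
    start = maxlen - (len(trimmed) - 1)
    inputs[start:] = trimmed[:-1]       # item seen at each step
    targets[start:] = trimmed[1:]       # item to predict at each step
    return inputs, targets

x, y = build_training_example([3, 7, 12, 5, 9], maxlen=8)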

Cell_Phones_and_Accessories

1. Reversed Pre-training and Short-Sequence Augmentation

Pre-train the model and generate 20 items for each sequence with length <= 20.

python main.py \
       --dataset=Cell_Phones_and_Accessories \
       --train_dir=default \
       --lr=0.001 \
       --hidden_units=32 \
       --maxlen=100 \
       --dropout_rate=0.5 \
       --num_blocks=2 \
       --l2_emb=0.0 \
       --num_heads=2 \
       --evalnegsample 100 \
       --reversed 1 \
       --reversed_gen_num 20 \
       --M 20

2. Next-Item Prediction with the Reversed Pre-Trained Model and Augmented Dataset

python main.py \
       --dataset=Cell_Phones_and_Accessories \
       --train_dir=default \
       --lr=0.001 \
       --hidden_units=32 \
       --maxlen=100 \
       --dropout_rate=0.5 \
       --num_blocks=2 \
       --l2_emb=0.0 \
       --num_heads=2 \
       --evalnegsample 100 \
       --reversed_pretrain 1 \
       --aug_traindata 17 \
       --M 18

Citation

@misc{jiang2021sequential,
      title={Sequential Recommendation with Bidirectional Chronological Augmentation of Transformer}, 
      author={Juyong Jiang and Yingtao Luo and Jae Boum Kim and Kai Zhang and Sunghun Kim},
      year={2021},
      eprint={2112.06460},
      archivePrefix={arXiv},
      primaryClass={cs.IR}
}