2021 credit card consuming recommendation

Overview

2021-credit-card-consuming-recommendation

My implementation and write-up for this competition: https://tbrain.trendmicro.com.tw/Competitions/Details/18. I ranked 9th on the private leaderboard.

Run My Implementation

Required libs

matplotlib, numpy, pytorch, and yaml. No specific versions are required as long as they are reasonably recent.

Preprocess

python3 data_to_pkl.py
  • The officially provided csv file should be placed in the data directory.
  • The output pkl file is also written to the data directory.
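
For reference, a minimal sketch of the kind of conversion data_to_pkl.py performs, assuming a hypothetical csv file name and that rows are simply parsed and pickled; the actual script may store a different structure.

    # Hypothetical sketch: read the official csv and save it as a pickle.
    # File names and the stored structure are assumptions, not the real script.
    import csv
    import pickle

    with open("data/official_data.csv", newline="") as f:   # assumed file name
        rows = list(csv.DictReader(f))

    with open("data/official_data.pkl", "wb") as f:
        pickle.dump(rows, f)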

Feature Extraction

python3 pkl_to_fea_allow_shorter.py
  • See the "Approach" section below for a detailed description of the optional parameters.

Training

python3 train_cv_allow_shorter.py -s save_model_dir
  • -s: where you want to save the trained model.

Inference

Generate model outputs

python3 test_cv_raw_allow_shorter.py model_dir max_len
  • model_dir: directory of the trained model.
  • max_len: maximum number of months considered for each customer.

Merge model outputs

python3 test_cv_merge_allow_shorter.py n_fold_train
  • n_fold_train: number of folds used for training.
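
As a rough illustration of the merging step (not the exact script), the per-fold outputs can be averaged and the top three of the 16 candidate categories taken for each customer; the array layout and file names below are assumptions.

    # Hypothetical sketch: average per-fold predictions and take the top 3
    # of the 16 candidate categories for each customer.
    import numpy as np

    n_fold_train = 5
    # Assumed layout: one (n_customers, 16) array per fold.
    fold_preds = [np.load(f"fold_{k}_pred.npy") for k in range(n_fold_train)]  # assumed file names
    mean_pred = np.mean(fold_preds, axis=0)
    top3 = np.argsort(-mean_pred, axis=1)[:, :3]   # indices of the 3 highest-scoring categories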

Approach

The following describes the execution environment, feature extraction, model design, and training used in this competition.

Execution Environment

Hardware: I initially used an ASUS P2440 UF laptop with an i7-8550U CPU, an MX130 GPU, and main memory expanded to 20 GB. Later, when using more features and data covering a longer period, I switched to an AWS p2.xlarge instance with a K80 GPU and about 64 GB of main memory. The AWS usage was funded by credits awarded for reaching the finals of a previous competition, part of which remained after that competition ended.

The programming language is Python 3 (no particular version); the libraries are those listed in the first half of this README, where matplotlib is used for plotting and inspection and yaml is used for storing model configurations.

Feature Extraction (with Data Observations) and Prediction Target

I first divided the columns into two groups, following the order of the official column description ("訓練資料欄位說明"). The columns from shop_tag (consumption category) through card_other_txn_amt_pct (share of spending on other cards) are derived from each customer's monthly consumption behavior per category, which necessarily changes over time, so I classified them as "time-varying". The columns from masts (marital status) to the end, such as marital status or education level, almost never change for a given customer within the two years covered by the competition data, so I classified them as "time-invariant" to save computation and storage. In fact, among the time-invariant columns, each customer uses on average only about 1.005 to 1.167 distinct states per column, with a maximum of 3 to 5.
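
To make the split concrete, here is a minimal sketch of partitioning one record's fields by the official column order; the helper and its inputs are assumptions, not code from this repository.

    # Hypothetical sketch: split one csv row (a dict) into time-varying and
    # time-invariant parts by column name, following the official column order.
    TIME_VARYING_FIRST, TIME_VARYING_LAST = "shop_tag", "card_other_txn_amt_pct"
    TIME_INVARIANT_FIRST = "masts"

    def split_columns(row, column_order):
        """column_order: list of column names in the official order (assumed given)."""
        i = column_order.index(TIME_VARYING_FIRST)
        j = column_order.index(TIME_VARYING_LAST)
        k = column_order.index(TIME_INVARIANT_FIRST)
        time_varying = {c: row[c] for c in column_order[i:j + 1]}
        time_invariant = {c: row[c] for c in column_order[k:]}
        return time_varying, time_invariant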

Time-Varying Features

For each customer's monthly consumption records, features are extracted with the following steps:

  1. Sort the categories by spending amount and keep the top n; the best submission used n = 13. Based on observation, about 99% of customers spend in at most 13 categories per month.
  2. The month's time feature: the month to be predicted minus this month, 1 dimension.
  3. The month's category feature, 49 dimensions: if a category's spending amount is within the month's top n and greater than 0, its feature value is n, n-1, n-2, …, 1 from the highest rank down; categories outside the top n, or with an amount less than or equal to 0, get 0.
  4. For each of the top-n categories, regardless of its spending amount, the following 22 features are taken: txn_cnt, txn_amt, domestic_offline_cnt, domestic_online_cnt, overseas_offline_cnt, overseas_online_cnt, domestic_offline_amt_pct, domestic_online_amt_pct, overseas_offline_amt_pct, overseas_online_amt_pct, card_*_txn_cnt (* = 1, 2, 4, 6, 10, other), card_*_txn_amt_pct (* = 1, 2, 4, 6, 10, other).
    • 1, 2, 4, 6, 10, and other are the six card IDs used most frequently across all consumption records.
  5. In total this gives 1 + 49 + 13 * 22 = 336 dimensions per month (see the sketch below).
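
A minimal sketch of how one month's 336-dimensional vector could be assembled from the steps above; the input layout (per-category records with a 0-based shop_tag index and the 22 fields under a single key) is an assumption, not the exact code in pkl_to_fea_allow_shorter.py.

    import numpy as np

    N_TOP = 13                 # n in step 1
    N_CATEGORIES = 49          # number of shop_tag categories
    PER_CATEGORY_FIELDS = 22   # fields listed in step 4

    def month_features(month_records, month_index, target_month):
        """month_records: one month's per-category records, each a dict with
        'shop_tag' (assumed 0-based index), 'txn_amt', and the 22 step-4 fields
        under the key 'fields' (assumed layout)."""
        # Step 2: time feature = month to be predicted minus this month.
        time_feat = np.array([target_month - month_index], dtype=np.float32)

        # Step 1: keep the top-n categories by spending amount.
        top = sorted(month_records, key=lambda r: r["txn_amt"], reverse=True)[:N_TOP]

        # Step 3: 49-dim rank feature (n for the highest rank down to 1, else 0).
        cat_feat = np.zeros(N_CATEGORIES, dtype=np.float32)
        for rank, rec in enumerate(top):
            if rec["txn_amt"] > 0:
                cat_feat[rec["shop_tag"]] = N_TOP - rank

        # Step 4: 22 fields for each of the top-n categories (zero-padded if fewer).
        per_cat = np.zeros((N_TOP, PER_CATEGORY_FIELDS), dtype=np.float32)
        for i, rec in enumerate(top):
            per_cat[i] = rec["fields"]

        # Step 5: 1 + 49 + 13 * 22 = 336 dimensions.
        return np.concatenate([time_feat, cat_feat, per_cat.reshape(-1)])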

The way values are taken across months is shown in the figure below: each rounded block represents all of one customer's consumption records for one month, N1 is 20 months, and N2 is 4 groups; within these limits, sequences are made as long, and groups as numerous, as possible. Months with no consumption records are skipped.

[Figure: sampling scheme for time-varying features]
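
Since the figure cannot be reproduced here, the following is only one plausible reading of the sampling scheme (histories of up to N1 = 20 months, up to N2 = 4 samples per customer, empty months skipped); treat it as an assumption rather than the script's exact logic.

    def sample_sequences(months_with_records, n1=20, n2=4):
        """months_with_records: sorted month indices in which the customer has at
        least one record (assumed input). Returns up to n2 (history, target) pairs,
        each history holding at most n1 months preceding the target month."""
        samples = []
        # Walk backwards from the most recent month, using each month once as a target.
        for end in range(len(months_with_records) - 1, 0, -1):
            if len(samples) == n2:
                break
            target = months_with_records[end]
            history = months_with_records[max(0, end - n1):end]  # as long as possible, capped at n1
            samples.append((history, target))
        return samples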

Time-Invariant Features

For each customer, the time-invariant features are built only from the data recorded for the largest-spending category in the last month with consumption inside the sampling range (the last record within the N1 window).

masts, gender_code, age, primary_card, and slam are each encoded as one-hot vectors or numeric values and then concatenated, giving 20 dimensions in total. Details are listed below (a sketch follows the list):

  • masts: 4 states including the missing value, 4 dimensions.
  • gender_code: 3 states including the missing value, 3 dimensions.
  • age: 10 states including the missing value, 10 dimensions.
  • primary_card: no missing values, 2 states, 2 dimensions.
  • slam: numeric; its log is used as the feature, 1 dimension.
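
A minimal sketch of assembling the 20-dimensional time-invariant feature described above; the state-to-index mappings and the use of log1p are assumptions (the real mappings come from the data).

    import numpy as np

    def one_hot(index, size):
        v = np.zeros(size, dtype=np.float32)
        v[index] = 1.0
        return v

    def static_features(masts_idx, gender_idx, age_idx, primary_card_idx, slam):
        """Indices are assumed to be precomputed positions of each state,
        including a slot for the missing value where applicable."""
        return np.concatenate([
            one_hot(masts_idx, 4),          # masts: 4 states incl. missing
            one_hot(gender_idx, 3),         # gender_code: 3 states incl. missing
            one_hot(age_idx, 10),           # age: 10 states incl. missing
            one_hot(primary_card_idx, 2),   # primary_card: 2 states
            np.array([np.log1p(max(slam, 0.0))], dtype=np.float32),  # slam: log-scaled
        ])  # 4 + 3 + 10 + 2 + 1 = 20 dimensions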

Other features were also tried here, but they did not improve results, possibly because their higher dimensionality makes training harder (e.g. cuorg, 35 dimensions including the missing value) or because customers may report them inaccurately (e.g. poscd).

Prediction Target

The target has 16 dimensions, corresponding to the 16 categories to be predicted. The category with the largest spending amount in the next month gets 1, the second 0.8, the third 0.6, categories ranked fourth or lower but with purchases get 0.2, and categories without purchases get 0.
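
A small sketch of building the 16-dimensional target from the next month's spending per category; the input format is an assumption.

    import numpy as np

    RANK_VALUES = [1.0, 0.8, 0.6]  # 1st, 2nd, 3rd place by next-month amount

    def build_target(next_month_amounts):
        """next_month_amounts: array of shape (16,) with the next month's spending
        for each of the 16 categories to predict (assumed input)."""
        target = np.zeros(16, dtype=np.float32)
        order = np.argsort(-next_month_amounts)
        for rank, cat in enumerate(order):
            if next_month_amounts[cat] <= 0:
                continue                      # no purchase -> stays 0
            target[cat] = RANK_VALUES[rank] if rank < 3 else 0.2  # 4th or lower with a purchase
        return target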

Summary

After removing samples whose targets are all 0 (i.e. no purchases in the target month), this procedure yields about 1.02 million samples.

Model Design and Training

The model architecture used in this competition is shown in the figure below. The core is a BiLSTM with attention, with a few linear layers before and after; the colored part of the figure marks the range over which attention operates. The final dense layers are (dense 128 + ReLU + dropout 0.1) * 2 + dense 16 + sigmoid. A PyTorch sketch follows the figure placeholder.

[Figure: model architecture]
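
A minimal PyTorch sketch consistent with the description above (BiLSTM + attention followed by the dense head). Hidden sizes other than the dense head, the attention formulation, and the way the 20-dimensional time-invariant features enter the network are assumptions, since those details appear only in the figure.

    import torch
    import torch.nn as nn

    class BiLSTMAttention(nn.Module):
        def __init__(self, in_dim=336, static_dim=20, hidden=128, n_out=16):
            super().__init__()
            self.pre = nn.Linear(in_dim, hidden)                # linear layer before the BiLSTM (size assumed)
            self.lstm = nn.LSTM(hidden, hidden, batch_first=True, bidirectional=True)
            self.att = nn.Linear(2 * hidden, 1)                 # simple additive attention over months (form assumed)
            self.head = nn.Sequential(                          # (dense 128 + ReLU + dropout 0.1) * 2 + dense 16 + sigmoid
                nn.Linear(2 * hidden + static_dim, 128), nn.ReLU(), nn.Dropout(0.1),
                nn.Linear(128, 128), nn.ReLU(), nn.Dropout(0.1),
                nn.Linear(128, n_out), nn.Sigmoid(),
            )

        def forward(self, seq, static):
            # seq: (batch, months, 336) monthly features; static: (batch, 20) time-invariant features
            h, _ = self.lstm(torch.relu(self.pre(seq)))
            w = torch.softmax(self.att(h), dim=1)               # attention weights over months
            context = (w * h).sum(dim=1)                        # weighted sum of BiLSTM outputs
            return self.head(torch.cat([context, static], dim=1))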

Training uses 5-fold cross validation. At prediction time, the outputs of the five models are averaged, and the top three categories according to the averaged ranking are output. Detailed parameters are listed below; parameters not mentioned follow the pytorch defaults without modification (a training sketch follows the list):

  • Number of epochs: 100; if the validation loss does not reach a new low for 10 consecutive epochs, training of that fold is stopped early.
  • Batch size: 512。
  • Loss: MSE。
  • Optimizer: Adam with learning rate 0.01.
  • Learning rate scheduler: the learning rate is multiplied by 0.95 each epoch, until it falls below 0.0001.
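
The hyperparameters above map to a straightforward PyTorch training setup; the sketch below is a hedged illustration in which model, train_loader, and val_loss_fn are placeholders, not the actual contents of train_cv_allow_shorter.py.

    import torch

    # Placeholders: `model` is the network sketched earlier; `train_loader` and
    # `val_loss_fn` come from one cross-validation fold (assumed to exist).
    criterion = torch.nn.MSELoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
    scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=0.95)

    best_val, patience = float("inf"), 0
    for epoch in range(100):
        model.train()
        for seq, static, target in train_loader:        # batch size 512 set in the DataLoader
            optimizer.zero_grad()
            loss = criterion(model(seq, static), target)
            loss.backward()
            optimizer.step()
        if optimizer.param_groups[0]["lr"] > 0.0001:    # decay by 0.95 per epoch until below 1e-4
            scheduler.step()
        val_loss = val_loss_fn(model)                   # placeholder validation evaluation
        if val_loss < best_val:
            best_val, patience = val_loss, 0
        else:
            patience += 1
            if patience >= 10:                          # stop after 10 epochs without a new low
                break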
Owner: Wang, Chung-Che
Code for "Learning Graph Cellular Automata"

Learning Graph Cellular Automata This code implements the experiments from the NeurIPS 2021 paper: "Learning Graph Cellular Automata" Daniele Grattaro

Daniele Grattarola 37 Oct 26, 2022
1st place solution to the Satellite Image Change Detection Challenge hosted by SenseTime

1st place solution to the Satellite Image Change Detection Challenge hosted by SenseTime

Lihe Yang 209 Jan 01, 2023
Sparse Physics-based and Interpretable Neural Networks

Sparse Physics-based and Interpretable Neural Networks for PDEs This repository contains the code and manuscript for research done on Sparse Physics-b

28 Jan 03, 2023
Aggragrating Nested Transformer Official Jax Implementation

NesT is a simple method, which aggragrates nested local transformers on image blocks. The idea makes vision transformers attain better accuracy, data efficiency, and convergence on the ImageNet bench

Google Research 169 Dec 20, 2022
Project of 'TBEFN: A Two-branch Exposure-fusion Network for Low-light Image Enhancement '

TBEFN: A Two-branch Exposure-fusion Network for Low-light Image Enhancement Codes for TMM20 paper "TBEFN: A Two-branch Exposure-fusion Network for Low

KUN LU 31 Nov 06, 2022
Official repository for "Deep Recurrent Neural Network with Multi-scale Bi-directional Propagation for Video Deblurring".

RNN-MBP Deep Recurrent Neural Network with Multi-scale Bi-directional Propagation for Video Deblurring (AAAI-2022) by Chao Zhu, Hang Dong, Jinshan Pan

SIV-LAB 22 Aug 31, 2022
Evaluation toolkit of the informative tracking benchmark comprising 9 scenarios, 180 diverse videos, and new challenges.

Informative-tracking-benchmark Informative tracking benchmark (ITB) higher diversity. It contains 9 representative scenarios and 180 diverse videos. m

Xin Li 15 Nov 26, 2022
Relative Uncertainty Learning for Facial Expression Recognition

Relative Uncertainty Learning for Facial Expression Recognition The official implementation of the following paper at NeurIPS2021: Title: Relative Unc

35 Dec 28, 2022
codes for paper Combining Dynamic Local Context Focus and Dependency Cluster Attention for Aspect-level sentiment classification

DLCF-DCA codes for paper Combining Dynamic Local Context Focus and Dependency Cluster Attention for Aspect-level sentiment classification. submitted t

15 Aug 30, 2022
A PyTorch implementation: "LASAFT-Net-v2: Listen, Attend and Separate by Attentively aggregating Frequency Transformation"

LASAFT-Net-v2 Listen, Attend and Separate by Attentively aggregating Frequency Transformation Woosung Choi, Yeong-Seok Jeong, Jinsung Kim, Jaehwa Chun

Woosung Choi 29 Jun 04, 2022
Mmdet benchmark with python

mmdet_benchmark 本项目是为了研究 mmdet 推断性能瓶颈,并且对其进行优化。 配置与环境 机器配置 CPU:Intel(R) Core(TM) i9-10900K CPU @ 3.70GHz GPU:NVIDIA GeForce RTX 3080 10GB 内存:64G 硬盘:1T

杨培文 (Yang Peiwen) 24 May 21, 2022
Combinatorially Hard Games where the levels are procedurally generated

puzzlegen Implementation of two procedurally simulated environments with gym interfaces. IceSlider: the agent needs to reach and stop on the pink squa

Autonomous Learning Group 3 Jun 26, 2022
Vehicle direction identification consists of three module detection , tracking and direction recognization.

Vehicle-direction-identification Vehicle direction identification consists of three module detection , tracking and direction recognization. Algorithm

5 Nov 15, 2022
Camera ready code repo for the NeuRIPS 2021 paper: "Impression learning: Online representation learning with synaptic plasticity".

Impression-Learning-Camera-Ready Camera ready code repo for the NeuRIPS 2021 paper: "Impression learning: Online representation learning with synaptic

2 Feb 09, 2022
An automated algorithm to extract the linear blend skinning (LBS) from a set of example poses

Dem Bones This repository contains an implementation of Smooth Skinning Decomposition with Rigid Bones, an automated algorithm to extract the Linear B

Electronic Arts 684 Dec 26, 2022
An implementation of Deep Graph Infomax (DGI) in PyTorch

DGI Deep Graph Infomax (Veličković et al., ICLR 2019): https://arxiv.org/abs/1809.10341 Overview Here we provide an implementation of Deep Graph Infom

Petar Veličković 491 Jan 03, 2023
Training, generation, and analysis code for Learning Particle Physics by Example: Location-Aware Generative Adversarial Networks for Physics

Location-Aware Generative Adversarial Networks (LAGAN) for Physics Synthesis This repository contains all the code used in L. de Oliveira (@lukedeo),

Deep Learning for HEP 57 Oct 22, 2022
Tutorial on active learning with the Nvidia Transfer Learning Toolkit (TLT).

Active Learning with the Nvidia TLT Tutorial on active learning with the Nvidia Transfer Learning Toolkit (TLT). In this tutorial, we will show you ho

Lightly 25 Dec 03, 2022
Code, pre-trained models and saliency results for the paper "Boosting RGB-D Saliency Detection by Leveraging Unlabeled RGB Images".

Boosting RGB-D Saliency Detection by Leveraging Unlabeled RGB This repository is the official implementation of the paper. Our results comming soon in

Xiaoqiang Wang 8 May 22, 2022
Agent-based model simulator for air quality and pandemic risk assessment in architectural spaces

Agent-based model simulation for air quality and pandemic risk assessment in architectural spaces. User Guide archABM is a fast and open source agent-

Vicomtech 10 Dec 05, 2022