Extreme Dynamic Classifier Chains - XGBoost for Multi-label Classification

Overview

Extreme Dynamic Classifier Chains

Classifier chains are a key technique in multi-label classification, since they allow label dependencies to be taken into account effectively. However, the classifiers are aligned according to a static order of the labels. In the concept of dynamic classifier chains (DCC), the label ordering is chosen dynamically for each prediction, depending on the respective instance at hand. We combine this concept with extreme gradient boosted trees (XGBoost), an effective and scalable state-of-the-art technique, and incorporate DCC in a fast multi-label extension of XGBoost which we make publicly available. As only the positive labels have to be predicted, and these are usually few, the training costs can be reduced substantially. Moreover, as experiments on ten datasets show, the length of the chain allows for more control over the usage of previous predictions and hence over the measure one wants to optimize.
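To make the prediction procedure concrete, here is a minimal sketch of a dynamic chain, assuming generic per-label scorers in place of the per-round XGBoost models actually trained by XDCC; the names predict_dynamic_chain, label_scorers, and threshold are illustrative and not part of the package:

import numpy as np

def predict_dynamic_chain(x, label_scorers, chain_length, threshold=0.5):
    # label_scorers: dict mapping a label index to a callable returning a
    # confidence score for that label given the augmented feature vector.
    # These stand in for the per-round XGBoost models trained by XDCC.
    num_labels = len(label_scorers)
    indicator = np.zeros(num_labels)        # labels predicted positive so far
    remaining = set(label_scorers)
    for _ in range(chain_length):
        if not remaining:
            break
        augmented = np.concatenate([x, indicator])
        # Score every remaining label; the chain order is thus decided
        # per instance instead of being fixed in advance.
        scores = {lbl: label_scorers[lbl](augmented) for lbl in remaining}
        best = max(scores, key=scores.get)
        if scores[best] < threshold:        # no confident positive label left
            break
        indicator[best] = 1.0               # feed the decision back as a feature
        remaining.remove(best)
    return np.flatnonzero(indicator)

# Illustrative scorers with constant confidences per label:
scorers = {0: lambda f: 0.9, 1: lambda f: 0.3, 2: lambda f: 0.7}
print(predict_dynamic_chain(np.array([1.0, 2.0]), scorers, chain_length=3))
# -> [0 2]: labels are picked most-confident-first until the score drops
#    below the threshold, mirroring that only positive labels are predicted.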

Installation

The first step is to build the modified multilabel version of XGBoost and install the resulting Python package, which the dynamic chain model requires. The build needs MinGW, i.e. the mingw32-make command, and Python 3. To start the build, run the following commands:

cd XGBoost_ML
mingw32-make -j4

After a successful build, the Python package can be installed:

cd python-package
python setup.py install

You should now be able to import the package into your Python project:

import xgboost as xgb
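To verify the build, a short smoke test with the core XGBoost API (DMatrix, train, and predict are part of standard XGBoost and should behave the same in the multilabel fork) ought to run without errors; the toy arrays below are made up for illustration:

import numpy as np
import xgboost as xgb

# Made-up toy data: 4 instances, 3 features, a binary target.
X = np.array([[0.0, 1.0, 2.0],
              [1.0, 0.0, 3.0],
              [2.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
y = np.array([0.0, 1.0, 1.0, 0.0])

dtrain = xgb.DMatrix(X, label=y)
booster = xgb.train({'max_depth': 2, 'eta': 0.1}, dtrain, num_boost_round=5)
print(booster.predict(dtrain))  # one score per instance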

Training the Dynamic Chain Model

We recommend running the models by calling train_dcc.py from a console. Place all datasets as .arff files in the datasets directory, appending -train to the filename of the train set and -test to that of the test set.

Parameters:

The following parameters are available:

--filename <string> (-f) [required]
  Name of your dataset .arff file located in the datasets sub-directory.

--num_labels <int> (-l) [required]
  Number of labels in the dataset.

--models <string> (-m) [required]
  Specifies all models that will be built. Available options:
  • dcc: The proposed dynamic chain model
  • sxgb: A single multilabel XGBoost model
  • cc-dcc: A classifier chain with the label order of a previously built dynamic chain
  • cc-freq: A classifier chain with a label order sorted by label frequency (frequent to rare) in the train set
  • cc-rare: A classifier chain with a label order sorted by label frequency (rare to frequent) in the train set
  • cc-rand: A classifier chain with a random label order
  • br: A binary relevance model
  Example: -m "dcc,br"

--validation <int> (-v) [optional, default: 0]
  Size of the validation set in percent. The first XX% of the train set will be used for validating the model; if the parameter is not set, the test set is used for evaluation. Example: --validation 20 uses the first 20% for validation and the remaining 80% for training.

--max_depth <int> (-d) [optional, default: 10]
  Maximum depth of each XGBoost multilabel tree.

--num_rounds <int> (-r) [optional, default: 10]
  Number of boosting rounds of each XGBoost model.

--chain_length <int> (-c) [optional, default: num_labels]
  Length of the chain, i.e. the number of labeling rounds. Each round builds a new XGBoost model that predicts a single label per instance.

--split <int> (-s) [optional, default: 1]
  Index of the split method used for building the trees. Available options:
  • maxGain: 1
  • maxWeight: 2
  • sumGain: 3
  • sumWeight: 4
  • maxAbsGain: 5
  • sumAbsGain: 6

--parameters <string> (-p) [optional, default: {}]
  XGBoost parameters used for each model in the chain. Example: -p "{'silent':1, 'eta':0.1}"

--features_to_transform <string> (-t) [optional]
  A list of all features in the dataset that have to be encoded. XGBoost can only process numerical features, so use this parameter to encode categorical ones (see the sketch after this list). Example: -t "featureA,featureB"

--output_extra (-o) [optional, default: True]
  Write extended log and JSON files.
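For context, encoding a categorical feature simply means mapping its categories to integer codes before the data reaches XGBoost. Here is a minimal sketch with pandas, using the column names from the -t example above; the actual transformation inside train_dcc.py may differ:

import pandas as pd

df = pd.DataFrame({
    'featureA': ['red', 'green', 'red'],  # categorical, would be listed in -t
    'featureB': ['low', 'high', 'high'],  # categorical, would be listed in -t
    'featureC': [0.5, 1.2, 3.4],          # already numerical, left untouched
})
for col in ['featureA', 'featureB']:
    # Map each category to an integer code so XGBoost can process the column.
    df[col] = df[col].astype('category').cat.codes
print(df)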

Example

We train two models, the dynamic chain and a binary relevance model, on a dataset called emotions with 6 labels. So we specify the models with -m "dcc, br" and the dataset with -f "emotions". Additionally, we place the files for training and testing in the datasets directory:

project
│   README.md
│   train_dcc.py   
│
└───datasets
│   │   emotions-train.arff
│   │   emotions-test.arff
│   
└───XGBoost_ML
    │   ...

The dcc model should build a full chain with 6 models, so we use -l 6 (and -c 6 to make the full chain length explicit). All XGBoost models, including the one for binary relevance, should train for 100 rounds with a maximum tree depth of 10 and a step size of 0.1, so we add -p "{'eta':0.1}" -r 100 -d 10.

The full command to train and evaluate both models is:

python train_dcc.py -p "{'eta':0.1}" -f "emotions" -l 6 -r 100 -d 10 -c 6 -m "dcc, br"