Overview

Multi-objective Optimized GBT (MooGBT)

MooGBT is a library for multi-objective optimization in Gradient Boosted Trees. MooGBT optimizes for multiple objectives by defining constraints on sub-objective(s) along with a primary objective. The constraints are defined as upper bounds on the sub-objective loss functions. MooGBT uses an Augmented Lagrangian (AL) based constrained optimization framework with Gradient Boosted Trees to optimize for multiple objectives.

With AL, we introduce dual variables into Boosting. The dual variables are iteratively optimized within the Boosting iterations. The Boosting objective function is updated with the AL terms, and its gradient is readily derived from the GBT gradients. With this gradient and the dual-variable updates, we solve the optimization problem by jointly iterating AL and Boosting steps.
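As a rough illustration of this joint iteration, the sketch below interleaves a boosting step on the AL-augmented gradient with a dual-variable update. This is a minimal, self-contained sketch of the idea for the binary-classification case, not the library's internal implementation; all helper names (moo_boost_sketch, deviance_gradient, deviance_cost) are illustrative.

    import numpy as np
    from sklearn.tree import DecisionTreeRegressor

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    def deviance_gradient(raw_pred, y):
        # gradient of binomial deviance w.r.t. raw scores
        return sigmoid(raw_pred) - y

    def deviance_cost(raw_pred, y):
        p = np.clip(sigmoid(raw_pred), 1e-12, 1 - 1e-12)
        return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

    def moo_boost_sketch(X, y, constraints, n_estimators=100, learning_rate=0.1):
        # y: columns [primary, sub_1, ...]; constraints: [{"mu": .., "b": ..}, ...]
        F = np.zeros(len(X))                  # raw model scores
        alphas = np.zeros(len(constraints))   # one dual variable per constraint
        trees = []
        for _ in range(n_estimators):
            # gradient of the AL-augmented objective: primary gradient plus
            # the sub-objective gradients weighted by the current duals
            grad = deviance_gradient(F, y[:, 0])
            for t in range(len(constraints)):
                grad += alphas[t] * deviance_gradient(F, y[:, t + 1])
            tree = DecisionTreeRegressor(max_depth=3).fit(X, -grad)  # boosting step
            F += learning_rate * tree.predict(X)
            trees.append(tree)
            # dual (alpha) update: grow alpha while a constraint is violated, else reset it
            for t, c in enumerate(constraints):
                violation = deviance_cost(F, y[:, t + 1]) - c["b"]
                alphas[t] = (alphas[t] + c["mu"] * violation) if violation > 0 else 0.0
        return trees, alphas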

This library is motivated by the paper Multi-objective Relevance Ranking [2], which introduces an Augmented Lagrangian based method to incorporate multiple objectives (MO) in LambdaMART, a GBT based search ranking algorithm.

We have modified the scikit-learn GBT implementation [3] to support multi-objective optimization.

Highlights -

  • follows the scikit-learn API conventions
  • supports all hyperparameters present in scikit-learn GBT
  • supports optimization for more than one sub-objective

  • Current support -

      • MooGBTClassifier - "binomial deviance" loss function, for primary and sub-objectives represented as binary variables
      • MooGBTRegressor - "least squares" loss function, for primary and sub-objectives represented as continuous variables

    Installation

    Moo-GBT can be installed from PyPI

    pip3 install moo-gbt

    Usage

    from multiobjective_gbt import MooGBTClassifier
    
    mu = 100
    b = 0.7 # upper bound on sub-objective cost
    
    constrained_gbt = MooGBTClassifier(
    				loss='deviance',
    				n_estimators=100,
    				constraints=[{"mu":mu, "b":b}], # One Constraint
    				random_state=2021
    )
    constrained_gbt.fit(X_train, y_train)

    Here y_train contains two columns: the first column should be the primary objective, and the following columns are the sub-objectives for which constraints have been specified (in the same order as the constraints list).
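    For instance, with a single constraint, a two-column y_train can be assembled like this (illustrative column names and labels):

    import pandas as pd

    # first column: primary objective; following columns: sub-objectives,
    # in the same order as the constraints list
    y_train = pd.DataFrame({
        "primary_label": [0, 1, 1, 0],  # hypothetical binary labels
        "sub_label_1":   [1, 1, 0, 0],
    })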


    Usage Steps

    1. Run unconstrained GBT on the primary objective. Unconstrained GBT is just the GradientBoostingClassifier/GradientBoostingRegressor from scikit-learn.
    2. Calculate the loss function value for the primary objective and the sub-objective(s)
      • For MooGBTClassifier, calculate the log loss between the predicted probability and the sub-objective label(s)
      • For MooGBTRegressor, calculate the mean squared error between the predicted value and the sub-objective label(s)
    3. Set the value of the hyperparameter b to less than the cost calculated in the previous step, and run MooGBTClassifier/MooGBTRegressor with this b. The lower the value of b, the more the sub-objective will be optimized. (A regression sketch follows this list.)
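    The same three steps apply to regression. Below is a minimal sketch with MooGBTRegressor, assuming its constructor mirrors MooGBTClassifier as the highlights above state; the bound b = 0.5 is a placeholder, in practice set it below the sub-objective MSE measured in step 2.

    from multiobjective_gbt import MooGBTRegressor

    constrained_gbt_reg = MooGBTRegressor(
    				n_estimators=100,
    				constraints=[{"mu": 100, "b": 0.5}],  # b below the unconstrained sub-objective MSE
    				random_state=2021
    )
    # y_train_ holds [primary, sub-objective] continuous columns
    constrained_gbt_reg.fit(X_train, y_train_)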

    Example with multiple binary objectives

    import pandas as pd
    import numpy as np
    import seaborn as sns
    import matplotlib.pyplot as plt
    
    from sklearn.model_selection import train_test_split
    from multiobjective_gbt import MooGBTClassifier

    We'll use a publicly available dataset - the Expedia hotel recommendations dataset.

    We define a multi-objective problem on the dataset, with the primary objective as the column "is_booking" and sub-objective as the column "is_package". Both these variables are binary.

    # Preprocessing Data
    train_data = pd.read_csv('examples/expedia-data/expedia-hotel-recommendations/train_data_sample.csv')
    
    po = 'is_booking' # primary objective
    so = 'is_package' # sub-objective
    
    features = list(train_data.columns)
    features.remove(po)
    outcome_flag = po
    
    # Train-Test Split
    X_train, X_test, y_train, y_test = train_test_split(
    					train_data[features],
    					train_data[outcome_flag],
    					test_size=0.2,
    					stratify=train_data[[po, so]],
    					random_state=2021
    )
    
    # Creating y_train_, y_test_ with 2 labels
    y_train_ = pd.DataFrame()
    y_train_[po] = y_train
    y_train_[so] = X_train[so]
    
    y_test_ = pd.DataFrame()
    y_test_[po] = y_test
    y_test_[so] = X_test[so]

    Without the constraints parameter, MooGBTClassifier works as the standard scikit-learn GBT classifier.

    unconstrained_gbt = MooGBTClassifier(
    				loss='deviance',
    				n_estimators=100,
    				random_state=2021
    )
    
    unconstrained_gbt.fit(X_train, y_train)

    Get train and test sub-objective costs for unconstrained model.

    def get_binomial_deviance_cost(pred, y):
    	return -np.mean(y * np.log(pred) + (1-y) * np.log(1-pred))
    
    pred_train = unconstrained_gbt.predict_proba(X_train)[:,1]
    pred_test = unconstrained_gbt.predict_proba(X_test)[:,1]
    
    # get sub-objective costs
    so_train_cost = get_binomial_deviance_cost(pred_train, X_train[so])
    so_test_cost = get_binomial_deviance_cost(pred_test, X_test[so])
    
    print(f"""
    Sub-objective cost train - {so_train_cost},
    Sub-objective cost test  - {so_test_cost}
    """)

    # Output:
    # Sub-objective cost train - 0.9114,
    # Sub-objective cost test  - 0.9145

    The constraint is specified as an upper bound on the sub-objective cost. In the unconstrained model, the cost of our sub-objective is ~0.9, so setting an upper bound below 0.9 will optimize the sub-objective.

    b = 0.65 # upper bound on cost
    mu = 100
    constrained_gbt = MooGBTClassifier(
    				loss='deviance',
    				n_estimators=100,
    				constraints=[{"mu":mu, "b":b}], # One Constraint
    				random_state=2021
    )
    
    constrained_gbt.fit(X_train, y_train_)

    With the constrained model, we achieve more than a 100% gain in AUROC for the sub-objective while the loss in primary-objective AUROC is kept within 6%. The entire study on this dataset can be found in the example notebook.
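    To reproduce such numbers, compare AUROC between the unconstrained and constrained models. A short sketch using scikit-learn's roc_auc_score; variable names (pred_test, y_test_, po, so) follow the example above, and pct_gain is an illustrative helper:

    from sklearn.metrics import roc_auc_score

    pred_test_constrained = constrained_gbt.predict_proba(X_test)[:,1]

    def pct_gain(labels, pred_before, pred_after):
        # percentage change in AUROC from the unconstrained to the constrained model
        auc_before = roc_auc_score(labels, pred_before)
        auc_after = roc_auc_score(labels, pred_after)
        return 100 * (auc_after - auc_before) / auc_before

    print(pct_gain(y_test_[po], pred_test, pred_test_constrained))  # primary objective
    print(pct_gain(y_test_[so], pred_test, pred_test_constrained))  # sub-objective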

    Looking at MooGBT primary and sub-objective losses -

    To get the raw loss values of the objectives per boosting iteration:

    # return a Pandas dataframe with loss values of objectives wrt boosting iteration
    losses = constrained_gbt.loss_.get_losses()
    losses.head()

    Similarly, you can look at the dual variable (alpha) values for the sub-objective(s). To get the raw alpha values per boosting iteration:

    constrained_gbt.loss_.get_alphas()

    These losses can be used to inspect the MooGBT learning process.

    sns.lineplot(data=losses, x='n_estimators', y='primary_objective', label='primary objective')
    sns.lineplot(data=losses, x='n_estimators', y='sub_objective_1', label='sub-objective')
    
    plt.xlabel("# estimators (trees)")
    plt.ylabel("Cost")
    plt.legend(loc="upper right")


    Choosing the right upper bound constraint b and mu value

    The upper bound b should be chosen based on an acceptable % loss in the primary objective's evaluation metric. The stricter the upper bound, the greater this loss, as MooGBT will optimize the sub-objective more.
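    One simple recipe (a suggestion, not a rule prescribed by the library) is to sweep b over fractions of the unconstrained sub-objective cost and pick the smallest b whose primary-objective loss is still acceptable. Variable names follow the example above:

    # sweep upper bounds below the unconstrained sub-objective cost (~0.91 here)
    for frac in [0.95, 0.9, 0.8, 0.7]:
        b = frac * so_train_cost
        model = MooGBTClassifier(
        				loss='deviance',
        				n_estimators=100,
        				constraints=[{"mu": 100, "b": b}],
        				random_state=2021
        )
        model.fit(X_train, y_train_)
        # evaluate primary and sub-objective AUROC for each b here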

    The table below summarizes the effect of the upper bound value on model performance for the primary and sub-objective(s) in the above example.

    %gain specifies the percentage increase in AUROC for the constrained MooGBT model relative to the unconstrained GBT model.

    b      Primary Objective %gain    Sub-objective %gain
    0.9    -0.7058                      4.805
    0.8    -1.735                      40.08
    0.7    -2.7852                     62.7144
    0.65   -5.8242                    113.9427
    0.6    -9.9137                    159.8931

    In general, across our experiments we have found that lower values of mu optimize the primary objective better while still satisfying the sub-objective constraints, given enough boosting iterations (n_estimators).

    The table below summarizes the results of varying mu while keeping the upper bound the same (b=0.6).

    b     mu      Primary Objective %gain    Sub-objective %gain
    0.6   1000    -20.6569                   238.1388
    0.6   100     -13.3769                   197.8186
    0.6   10       -9.9137                   159.8931
    0.6   5        -8.643                    146.4171

    MooGBT Learning Process

    MooGBT optimizes for multiple objectives by defining constraints on sub-objective(s) along with a primary objective. The constraints are defined as upper bounds on the sub-objective loss functions.

    MooGBT differs from a standard GBT in the loss function it optimizes: it combines the primary objective C1 with the sub-objective constraints using the Augmented Lagrangian (AL) constrained optimization approach.
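    A standard dual form of this objective, consistent with the description below and with the AL treatment in [2] (a hedged reconstruction, not necessarily the exact formula used in the code), is:

    \min_{s} \max_{\alpha \ge 0} \; \mathcal{L}(s, \alpha)
        = C_1(s) + \sum_{t} \alpha_t \big( C_t(s) - b_t \big)
        - \frac{1}{2\mu} \sum_{t} \alpha_t^2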

    where α = [α1, α2, α3, …] is a vector of dual variables. The Lagrangian is solved by minimizing with respect to the primal variables s and maximizing with respect to the dual variables α. The Augmented Lagrangian solves the constrained optimization iteratively; since AL is an iterative approach, we integrate it with the boosting iterations of GBT to update the dual variables α.

    Alpha (α) update -

    At iteration k, if constraint t is not satisfied, i.e., C_t(s) > b_t, then α_t^k > α_t^(k-1). Otherwise, if the constraint is met, the dual variable α_t is set to 0.
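    Written out, an update rule consistent with this description (the exact step size is an assumption; μ acts as the AL penalty weight) is:

    \alpha_t^{k} =
      \begin{cases}
        \alpha_t^{k-1} + \mu \big( C_t(s) - b_t \big), & \text{if } C_t(s) > b_t \\
        0, & \text{otherwise}
      \end{cases}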

    Public contents

    • _gb.py: contains the MooGBTClassifier and MooGBTRegressor classes, with the implementations of the fit and predict functions. Extends _gb.py from scikit-learn.

    • _gb_losses.py: contains the BinomialDeviance and LeastSquares loss function classes. Extends _gb_losses.py from scikit-learn.

    More examples

    The examples directory contains several illustrations of how one can use this library.

    References - 

    [1] Multi-objective Ranking via Constrained Optimization - https://arxiv.org/pdf/2002.05753.pdf
    [2] Multi-objective Relevance Ranking - https://sigir-ecom.github.io/ecom2019/ecom19Papers/paper30.pdf
    [3] Scikit-learn GBT implementation (GradientBoostingClassifier and GradientBoostingRegressor) - https://scikit-learn.org/stable/modules/ensemble.html
