LLVM-based compiler for LightGBM gradient-boosted trees. Speeds up prediction by ≥10x.

Overview

lleaves 🍃


An LLVM-based compiler for LightGBM decision trees.

lleaves converts trained LightGBM models into optimized machine code, speeding up prediction by ≥10x.

Example

lgbm_model = lightgbm.Booster(model_file="NYC_taxi/model.txt")
%timeit lgbm_model.predict(df)
# 12.77s

llvm_model = lleaves.Model(model_file="NYC_taxi/model.txt")
llvm_model.compile()
%timeit llvm_model.predict(df)
# 0.90s 

Why lleaves?

  • Speed: both low-latency single-row prediction and high-throughput batch prediction.
  • Drop-in replacement: The interface of lleaves.Model is a subset of LightGBM.Booster.
  • Dependencies: llvmlite and numpy. LLVM comes statically linked.

Installation

conda install -c conda-forge lleaves or pip install lleaves (Linux and macOS only).

Benchmarks

Benchmarks were run on a dedicated Intel i7-4770 (Haswell, 4 cores). Stated runtimes are the minimum over 20,000 runs.

Dataset: NYC-taxi

Mostly numerical features.

batchsize          1         10        100
LightGBM      52.31μs    84.46μs   441.15μs
ONNX Runtime  11.00μs    36.74μs   190.87μs
Treelite      28.03μs    40.81μs    94.14μs
lleaves        9.61μs    14.06μs    31.88μs

Dataset: MTPL2

Mix of categorical and numerical features.

batchsize      10,000    100,000    678,000
LightGBM      95.14ms   992.47ms  7034.65ms
ONNX Runtime  38.83ms   381.40ms  2849.42ms
Treelite      38.15ms   414.15ms  2854.10ms
lleaves        5.90ms    56.96ms   388.88ms

Advanced usage

To avoid any Python overhead during prediction you can link directly against the generated binary. See benchmarks/c_bench/ for an example of how to do this. The function signature can change between major versions.
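As a rough illustration, the same call can also be made from Python through ctypes, which is handy for sanity-checking the binary before wiring up a C build. A minimal sketch, assuming the default forest_root entry point with the (double*, double*, int, int) start/end-row signature that appears in the benchmark code quoted in the comments below; the library path and feature count are hypothetical:

import ctypes

import numpy as np

# load the compiled model binary (hypothetical path)
lib = ctypes.CDLL("./compiled_model.so")
lib.forest_root.argtypes = [
    ctypes.POINTER(ctypes.c_double),  # row-major input features
    ctypes.POINTER(ctypes.c_double),  # output predictions
    ctypes.c_int,                     # start row index
    ctypes.c_int,                     # end row index (exclusive)
]
lib.forest_root.restype = None

NUM_FEATURES = 10  # hypothetical; must match the model
row = np.zeros(NUM_FEATURES, dtype=np.float64)
out = np.zeros(1, dtype=np.float64)
lib.forest_root(
    row.ctypes.data_as(ctypes.POINTER(ctypes.c_double)),
    out.ctypes.data_as(ctypes.POINTER(ctypes.c_double)),
    0,
    1,
)
print(out[0])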

Development

conda env create
conda activate lleaves
pip install -e .
pre-commit install
./benchmarks/data/setup_data.sh
pytest

Comments
  • How can we reduce the size of the compiled file?

    Hello Simon,

    I tried to compile a LightGBM model using both LLeaves and TreeLite.

    I found that the .so file compiled by LLeaves is ~80% larger than the files compiled by TreeLite.

    I want to ask: is it possible to reduce the size of the .so file compiled by LLeaves?

    opened by fuyw 7
  • Only one CPU is used in prediction

    Hi, we installed lleaves using pip install lleaves. We found that prediction only utilizes one CPU core, even though we set n_jobs=8 and have 8 CPU cores. This is inconsistent with the lleaves code here: https://github.com/siboehm/lleaves/blob/master/lleaves/lleaves.py#L140

    Why would that happen? Any suggestions are highly appreciated.

    opened by jiazou-bigdata 4
  • extract_pandas_traintime_categories: return empty list if pandas_categorical is null in model file

    In a LightGBM model file, pandas_categorical may be null. When data_processing._dataframe_to_ndarray is called, the check if len(cat_cols) != len(pd_traintime_categories) then fails with TypeError: object of type 'NoneType' has no len().
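    A minimal sketch of the guard this issue asks for, assuming a hypothetical helper that parses the pandas_categorical entry of the model file (the exact line format is an assumption here):

    import json

    def extract_pandas_traintime_categories(file_lines):
        # the model file stores pandas_categorical as JSON; it may be "null"
        for line in file_lines:
            if line.startswith("pandas_categorical"):
                value = json.loads(line.split(":", 1)[1])
                # fall back to an empty list so len() comparisons stay valid
                return value if value is not None else []
        return []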

    opened by chenglin 4
  • saving models persistently

    First of all, thank you for your impressive work. I wanted to ask if there is a way to store the compiled models persistently. In my case the predictor is composed of ~100 LightGBM models, so compiling the full predictor is highly time-consuming. When I tried to pickle the compiled lleaves model, I got:

    ValueError: ctypes objects containing pointers cannot be pickled

    for which I guess there is no easy workaround. Do you know if it is possible to avoid re-compilation of the original LightGBM instances? Thank you
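    One workaround grounded in the cache parameter that shows up in a snippet further down this page: persist the compiled binary instead of pickling the Python object. A minimal sketch, assuming the cached file is reused on later runs (cache path hypothetical):

    import lleaves

    llvm_model = lleaves.Model(model_file="model.txt")
    # the first run compiles and writes the binary; later runs pointing at
    # the same cache path load it instead of recompiling
    llvm_model.compile(cache="model_cache.bin")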

    opened by nepslor 4
  • Benchmarking

    I tried benchmarking lleaves vs treelite and found that lleaves is slightly slower than treelite. I might be doing something wrong?

    I benchmark with google benchmark, using batch size 1 and random features. I have ~600 trees with 450 leaves and max depth 13. Treelite is compiled with clang 10.0; I think we did see that treelite was a lot slower using GCC.

    I noticed that the compile step for lleaves took several hours, so maybe the forest I'm using is somehow off?

    In any case I think your library looks very nice :)

    Xeon E2278G

    ------------------------------------------------------
    Benchmark            Time             CPU   Iterations
    ------------------------------------------------------
    BM_LLEAVES       32488 ns        32487 ns        21564
    BM_TREELITE      27251 ns        27250 ns        25635
    

    EPYC 7402P

    ------------------------------------------------------
    Benchmark            Time             CPU   Iterations
    ------------------------------------------------------
    BM_LLEAVES       38020 ns        38019 ns        18308
    BM_TREELITE      32155 ns        32154 ns        21579
    
    #include <benchmark/benchmark.h>
    #include <random>
    #include <vector>
    #include "lleavesheader.h"
    #include "treeliteheader.h"
    
    
    constexpr int NUM_FEATURES = 108;
    
    static void BM_LLEAVES(benchmark::State& state)
    {
        std::random_device dev;
        std::mt19937 rng(dev());
        // uniform_int_distribution over the unsigned mt19937 result_type
        // cannot represent a negative bound; draw real features in [-10, 10]
        std::uniform_real_distribution<double> dist(-10.0, 10.0);
    
        std::size_t N = 10000000;
        std::vector<double> f;
        f.reserve(N);
        for (std::size_t i = 0; i < N; ++i) {
            f.push_back(dist(rng));
        }
    
        double out;
        std::size_t i = 0;
        for (auto _ : state) {
            forest_root(f.data() + NUM_FEATURES * i, &out, (int)0, (int)1);
            i = (i + 1) % (N / NUM_FEATURES);  // wrap to stay inside the buffer
        }
    }
    
    static void BM_TREELITE(benchmark::State& state)
    {
        std::random_device dev;
        std::mt19937 rng(dev());
        std::uniform_real_distribution<double> dist(-10.0, 10.0);
    
        std::size_t N = 10000000;
        std::vector<DE::Entry> f;
        f.reserve(N);
        for (std::size_t i = 0; i < N; ++i) {
            auto e = DE::Entry();
            e.missing = -1;
            e.fvalue = dist(rng);
            e.qvalue = 0;
            f.push_back(e);
        }
    
        std::size_t i = 0;
        union DE::Entry *pFeatures = nullptr;
        for (auto _ : state) {
            pFeatures = f.data() + NUM_FEATURES * i;
            predict(pFeatures, 1);   // call the treelite predict function
            i = (i + 1) % (N / NUM_FEATURES);  // wrap to stay inside the buffer
        }
    }
    BENCHMARK(BM_LLEAVES);
    BENCHMARK(BM_TREELITE);
    BENCHMARK_MAIN();
    
    opened by skaae 4
  • Additional performance benchmarks

    Hi, currently evaluating this as a potential performance enhancement on our MLOps / Inference stack.

    Thought I'd give some numbers here (based on a MacBook Pro 2019).

    Test setup is as follows:

    a) generate artificial data: X = 1E6 x 200 float64; Y = X.sum() for regression, Y = X.sum() > 100 for the binary classifier
    b) for n_feat in [...]: fit a model on 1000 samples and n_feat features; compile the model
    c) for batchsize in [...]: predict a randomly sampled batch of all data items 10 times, using (1) LGBM.predict(), (2) lleaves.predict(), (3) lleaves.predict(n_jobs=1); measure TOTAL time taken

    For regression results are:

    [plot: regression benchmark results]

    The break-even between parallel lleaves and n_jobs=1 seems to be around 1k samples per batch, independent of the number of features. Using this logic, we would get better performance than LGBM at any number of samples.

    For classification:

    [plot: classification benchmark results]

    Also, here, the break-even is around 1k samples.

    For classification with HIGHLY IMBALANCED data (1/50 positive), the break-even is only at 10k samples. Any ideas on why this is the case?

    [plot: imbalanced classification benchmark results]
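    A minimal sketch of the dispatch rule this break-even suggests; the ~1k threshold is empirical and the helper name hypothetical, so tune both per model and machine:

    def predict_adaptive(llvm_model, X, threshold=1000):
        # below the break-even, thread startup overhead dominates, so
        # single-threaded prediction wins for small batches
        n_jobs = 1 if len(X) < threshold else 8
        return llvm_model.predict(X, n_jobs=n_jobs)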

    opened by Zahlii 4
  • [Question] how does model cache play with distributed workers with different CPUs?

    Hello, thank you for this great library. I have a question about the model cache file. I am using Ray to manage a small cluster of PCs with both Intel and AMD CPUs and different OSes (Ubuntu/ClearLinux). My program has been using numba to speed things up, and the JIT mode (instead of AOT mode) works fine: Ray can send the numba functions to different PCs in the cluster and they compile locally.

    So for lleaves, if I compile the models on one node and distribute the generated cache file to all nodes in the cluster, will it work? Or do I have to stick to the "JIT" mode, where models are always compiled locally each time? I am using ensemble methods with many lgbm models (>1000 in total, each small, about 100 trees, max_depth 10). Or maybe I should have all models compiled locally on each PC? Thank you.

    opened by crayonfu 3
  • How to use multiple models via the C_API?

    Hi Simon, many thanks for the nice work. I have a question about using the C_API:

    If I have 2 LightGBM models in my application and I want to predict using the C_API, I might need the following two functions:

    void forest_root_model1(double *, double *, int, int);
    
    void forest_root_model2(double *, double *, int, int);
    

    Do I need to modify the llvm_model.compile() function to change the function names?
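    Per the 0.2.6 release notes further down this page, the compiler gained a way to set the root function's name. A minimal sketch, assuming the keyword is exposed on compile() as function_name (check the signature of your installed version):

    import lleaves

    model1 = lleaves.Model(model_file="model1.txt")
    model1.compile(function_name="forest_root_model1")

    model2 = lleaves.Model(model_file="model2.txt")
    model2.compile(function_name="forest_root_model2")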

    opened by fuyw 3
  • Does this cause core dump ?

    Recently, I found that one of my models causes a core dump when I use lleaves for prediction.

    I am confused about the two functions below.

    In codegen.py, the function param type can be int* if the param is categorical:

    def make_tree(tree):
        # declare the function for this tree
        func_dtypes = (INT_CAT if f.is_categorical else DOUBLE for f in tree.features)
        scalar_func_t = ir.FunctionType(DOUBLE, func_dtypes)
        tree_func = ir.Function(module, scalar_func_t, name=str(tree))
        tree_func.linkage = "private"
        # populate function with IR
        gen_tree(tree, tree_func)
        return LTree(llvm_function=tree_func, class_id=tree.class_id)
    

    But in data_processing.py, which predict uses, all feature params are converted to double*:

    def ndarray_to_ptr(data: np.ndarray):
        """
        Takes a 2D numpy array, converts to float64 if necessary and returns a pointer
    
        :param data: 2D numpy array. Copying is avoided if possible.
        :return: pointer to 1D array of dtype float64.
        """
        # ravel makes sure we get a contiguous array in memory and not some strided View
        data = data.astype(np.float64, copy=False, casting="same_kind").ravel()
        ptr = data.ctypes.data_as(POINTER(c_double))
        return ptr
    

    Is this just like passing mismatched pointer types, as in:

    int* predict(int* a, double* b);
    double a = 1.1;
    double b = 2.2;
    predict((int*)&a, &b);  // the double's bits get reinterpreted as int
    

    Does this happen in lleaves?

    opened by chenglin 3
  • compile with multiple threads

    I find that compilation only uses one CPU core. For my model, it can take a long time to compile.

    Could compilation be made multi-threaded, just like make -j?

    opened by chenglin 3
  • Accept boosters as model inputs?

    Model currently requires the path to a model file. I was wondering if it'd make sense to also accept a booster. We could call to_string and save it as a temporary file, or just work with the string representation directly. It'd make users' lives (a little) easier.
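    A minimal sketch of the temporary-file route described above; LightGBM's Booster exposes model_to_string(), and the helper name is hypothetical:

    import tempfile

    import lleaves

    def model_from_booster(booster):
        # dump the trained Booster to a temp file and point lleaves at it
        with tempfile.NamedTemporaryFile(
            mode="w", suffix=".txt", delete=False
        ) as f:
            f.write(booster.model_to_string())
        return lleaves.Model(model_file=f.name)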

    opened by lbittarello 3
  • Bump pypa/gh-action-pypi-publish from 1.5.2 to 1.6.4

    Bumps pypa/gh-action-pypi-publish from 1.5.2 to 1.6.4.

    Release notes

    Sourced from pypa/gh-action-pypi-publish's releases.

    v1.6.4

    oh, boi! again?

    This is the last one tonight, promise! It fixes this embarrassing bug that was actually caught by the CI but got overlooked due to the lack of sleep. TL;DR GH passed $HOME from the external env into the container and that tricked the Python's site module to think that the home directory is elsewhere, adding non-existent paths to the env vars. See #115.

    Full Diff: https://github.com/pypa/gh-action-pypi-publish/compare/v1.6.3...v1.6.4

    v1.6.3

    Another Release!? Why?

    In pypa/gh-action-pypi-publish#112, it was discovered that passing a $PATH variable even breaks the shebang. So this version adds more safeguards to make sure it keeps working with a fully broken $PATH.

    Full Diff: https://github.com/pypa/gh-action-pypi-publish/compare/v1.6.2...v1.6.3

    v1.6.2

    What's Fixed

    • Made the $PATH and $PYTHONPATH environment variables resilient to broken values passed from the host runner environment, which previously allowed the users to accidentally break the container's internal runtime as reported in pypa/gh-action-pypi-publish#112

    Internal Maintenance Improvements

    New Contributors

    Full Diff: https://github.com/pypa/gh-action-pypi-publish/compare/v1.6.1...v1.6.2

    v1.6.1

    What's happened?!

    There was a sneaky bug in v1.6.0 which caused Twine to be outside the import path in the Python runtime. It is fixed in v1.6.1 by updating $PYTHONPATH to point to a correct location of the user-global site-packages/ directory.

    Full Diff: https://github.com/pypa/gh-action-pypi-publish/compare/v1.6.0...v1.6.1

    v1.6.0

    Anything's changed?

    The only update is that the Python runtime has been upgraded from 3.9 to 3.11. There are no functional changes in this release.

    Full Changelog: https://github.com/pypa/gh-action-pypi-publish/compare/v1.5.2...v1.6.0

    Commits
    • c7f29f7 🐛 Override $HOME in the container with /root
    • 644926c 🧪 Always run smoke testing in debug mode
    • e71a4a4 Add support for verbose bash execusion w/ $DEBUG
    • e56e821 🐛 Make id always available in twine-upload
    • c879b84 🐛 Use full path to bash in shebang
    • 57e7d53 🐛Ensure the default $PATH value is pre-loaded
    • ce291dc 🎨🐛Fix the branch @ pre-commit.ci badge links
    • 102d8ab 🐛 Rehardcode devpi port for GHA srv container
    • 3a9eaef 🐛Use different ports in/out of GHA containers
    • a01fa74 🐛 Use localhost @ GHA outside the containers
    • Additional commits viewable in compare view

    dependencies 
    opened by dependabot[bot] 0
  • lleaves costs too much memory

    Thanks for your great work. I want to try it out, but the memory consumption is staggering: I can't use it on a machine with 32G of memory; out-of-memory errors occur. Is there something I haven't set right?

    import lleaves
    import pdb
    
    MODEL_TXT_PATH = '/home/kk/models/lgb080401_amp.txt'
    llvm_model = lleaves.Model(model_file=MODEL_TXT_PATH)
    # num = llvm_model.num_trees()
    # pdb.set_trace()
    llvm_model.compile(cache='./lleaves.bin', fblocksize=34)
    
    opened by 111qqz 5
  • Platform interoperability

    Is there a way to effectively check if compiled models are able to run on a machine?

    I am running predictions on various platforms. When loading the compiled model, I load the one that was compiled on the same platform (using PLATFORM = sys.platform + '-' + sysconfig.get_platform().split('-')[-1].lower(), resulting in either darwin-arm64 or linux-x86_64). However, models compiled in a linux-x86_64 environment are sometimes not interoperable with other linux-x86_64 machines (I use AWS Fargate, which runs the container on whatever hardware is available). This results in exit code 132 (Illegal Instruction) in the model.predict() loop.

    The underlying reason is probably that the underlying machines are not of the same architecture (ARM-based?). For example, when I compile a model within a Docker container (with DOCKER_DEFAULT_PLATFORM=linux/amd64) on my M1 Mac, it registers the platform as linux-x86_64, but the model cannot be used on an AWS Linux machine using Docker.

    What would be a solid way to go about this issue? Is there some LLVM version which I need to look at in order for models to be interoperable?

    Thanks a lot.
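    A pragmatic sketch of the compile-per-machine fallback, keying the cache by a platform tag as described above (helper name and paths hypothetical). Note that, as this issue shows, even machines with matching tags can differ in supported instruction sets, so compiling on the exact target machine is the safe default:

    import platform
    import sys

    import lleaves

    def load_or_compile(model_path, cache_dir):
        # lleaves targets the native arch, so caches are only safe on the
        # machine (family) that produced them; key the cache per platform
        tag = f"{sys.platform}-{platform.machine()}".lower()
        model = lleaves.Model(model_file=model_path)
        model.compile(cache=f"{cache_dir}/model-{tag}.bin")
        return model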

    opened by TomScheffers 3
  • Improve Python API for model serialization

    I am loading a model with thousands of trees, which takes approx. 10 minutes. Therefore I want to compile the model once and then serialize it to a file. Pickle and dill give the following error: "ValueError: ctypes objects containing pointers cannot be pickled". Is there a way to save/load the compiled model to/from disk? Thanks :)

    enhancement 
    opened by TomScheffers 3
  • Windows support

    Forgive me, I'm ignorant of compilation nuances on different operating systems. Is Windows support on PyPI possible? I see that Windows support was removed in a PR about a year ago, but there are no notes.

    opened by AnotherSamWilson 1
Releases (0.2.7)
  • 0.2.7(Aug 10, 2022)

    What's Changed

    • Avoid undefined behaviour / poison by checking for NaNs before llvm::fptosi, by @siboehm in https://github.com/siboehm/lleaves/pull/23. The bug broke categorical predictions when NaNs occurred, but only on ARM.

    Full Changelog: https://github.com/siboehm/lleaves/compare/0.2.6...0.2.7

  • 0.2.6(Jul 10, 2022)

    Minor new feature: Allow specification of the root function's name in the compiled binary. This enables linking against multiple lleaves-compiled trees. Thanks @fuyw!

    What's Changed

    • Chore: Bump pre-commit and Github actions + py3.10 on CI by @siboehm in https://github.com/siboehm/lleaves/pull/22
    • add function_name to compiler by @fuyw in https://github.com/siboehm/lleaves/pull/21

    New Contributors

    • @fuyw made their first contribution in https://github.com/siboehm/lleaves/pull/21

    Full Changelog: https://github.com/siboehm/lleaves/compare/0.2.5...0.2.6

  • 0.2.5(Mar 23, 2022)

  • 0.2.4(Nov 22, 2021)

  • 0.2.3(Nov 21, 2021)

  • 0.2.2(Sep 26, 2021)

    • Compiler flags to tune performance & compilation speed: fblocksize, finline, fcodemodel.
    • Compile parameter raw_score, equivalent to the raw_score parameter of LightGBM's Booster.predict().
  • 0.2.1(Sep 2, 2021)

  • 0.2.0(Jul 28, 2021)

    Focus on performance improvements.

    • Instruction cache blocking
    • Aggressive function inlining
    • Proper native arch targeting
    • Objective functions lowered into IR

    Small models now run ~30% faster, large models ~300% faster.

    Plus a code refactor for readability.

  • 0.1.1(Jun 27, 2021)

  • 0.1.0(Jun 26, 2021)

Owner
Simon Boehm
Data Engineering @QuantCo | Master's thesis @theislab | CS student @ ETH Zurich.