ACV is a Python library that provides explanations for any machine learning model or data.

Overview

Active Coalition of Variables (ACV):

ACV is a Python library that aims to explain any machine learning model or data.

  • It gives local rule-based explanations for any model or data.
  • It provides a better estimation of Shapley Values for tree-based models (more accurate than the path-dependent TreeSHAP). It also proposes new Shapley Values with better local fidelity.

The different explanations fall into two groups: Agnostic Explanations and Tree-based Explanations.

See the papers here.

Installation

Requirements

Python 3.6+

OSX: ACV uses Cython extensions that need to be compiled with multi-threading support enabled. The default Apple Clang compiler does not support OpenMP. To solve this issue, obtain the latest gcc version with Homebrew (e.g. brew install gcc, then point CC/CXX at the Homebrew gcc before running pip): see for example the pysteps installation instructions for OSX.

Windows: Install MinGW (a Windows distribution of gcc) or Microsoft Visual C++.

Install the acv package:

$ pip install acv-exp

A. Agnostic explanations

The Agnostic approaches explain any data (X, Y) or model (X, f(X)) using the following explanation methods:

  • Same Decision Probability (SDP) and Sufficient Explanations
  • Sufficient Rules

See the paper Consistent Sufficient Explanations and Minimal Local Rules for explaining regression and classification models for more details.

I. First, we fit the explainer (ACXplainer) to the input-output pairs of the data (X, Y) or of the model (X, f(X)), depending on whether we want to explain the data or the model.

from acv_explainers import ACXplainer
from sklearn.metrics import roc_auc_score

# ACXplainer has the same hyperparameters as a Random Forest; they should be tuned to maximize performance.
acv_xplainer = ACXplainer(classifier=True, n_estimators=50, max_depth=5)
acv_xplainer.fit(X_train, y_train)

roc = roc_auc_score(y_test, acv_xplainer.predict(X_test))  # y_true first, then the predictions
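
The snippet above assumes that X_train, y_train, X_test and y_test already exist. For a self-contained run, a minimal and purely illustrative setup could be:

from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

# Illustrative data only: any tabular (X, y) works.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)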

II. Then, we can load all the explanations in a webApp as follows:

import acv_app
import os

# compile the ACXplainer
acv_app.compile_ACXplainers(acv_xplainer, X_train, y_train, X_test, y_test, path=os.getcwd())

# Launch the webApp
acv_app.run_webapp(pickle_path=os.getcwd())

[Screenshot of the ACV webApp]

III. Or we can compute each explanation separately as follows:

Same Decision Probability (SDP)

The main tool of our explanations is the Same Decision Probability (SDP). Given $x = (x_S, x_{\bar{S}})$, the same decision probability $SDP_S(x; f)$ of the variables $x_S$ is the probability that the prediction remains the same when we fix the variables $X_S = x_S$, i.e. when the variables $X_{\bar{S}}$ are missing: $SDP_S(x; f) = P(f(X) = f(x) \mid X_S = x_S)$.

  • How to compute $SDP_S(x; f)$?
sdp = acv_xplainer.compute_sdp_rf(X, S, data_bground) # data_bground is the background dataset used for the estimation; it should be the training samples.
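
As an illustration, the call below estimates the SDP of a hand-picked coalition for a few test instances; the format of S (assumed here to be one list of column indices per instance) should be checked against the API reference.

# Illustrative sketch: the format of S is an assumption.
X = X_test[:5]
S = [[0, 2] for _ in range(len(X))]  # fix features 0 and 2 for each instance
sdp = acv_xplainer.compute_sdp_rf(X, S, X_train)  # background = training samples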

Minimal Sufficient Explanations

A Sufficient Explanation is a minimal subset $S$ such that fixing the values $X_S = x_S$ maintains the prediction with high probability $\pi$. See the paper here for more details.

  • How to compute the Minimal Sufficient Explanation?

    The following code returns the Sufficient Explanation with minimal cardinality.

sdp_importance, min_sufficient_expl, size, sdp = acv_xplainer.importance_sdp_rf(X, y, X_train, y_train, pi_level=0.9)
  • How to compute all the Sufficient Explanations?

    Since the Minimal Sufficient Explanation may not be unique for a given instance, we can compute all of them.

sufficient_expl, sdp_expl, sdp_global = acv_xplainer.sufficient_expl_rf(X, y, X_train, y_train, pi_level=0.9)
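
As a quick illustration, one can inspect the explanations found for a single observation; the per-instance indexing below is an assumption about the output layout and should be verified against the returned objects.

# Illustrative sketch: the output layout is assumed.
i = 0
print(f"minimal sufficient explanation of instance {i}: columns {min_sufficient_expl[i]}")
print(f"all sufficient explanations of instance {i}: {sufficient_expl[i]}")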

Local Explanatory Importance

For a given instance, the local explanatory importance of each variable corresponds to the frequency of appearance of that variable in the Sufficient Explanations. See the paper here for more details.

  • How to compute the Local Explanatory Importance?
lximp = acv_xplainer.compute_local_sdp(X_train.shape[1], sufficient_expl) # d = number of features
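
To read these scores, one can simply rank the features by frequency. The sketch below assumes lximp is an array of length d holding appearance frequencies in [0, 1].

import numpy as np

# Rank features by their local explanatory importance (illustrative).
order = np.argsort(-np.asarray(lximp))
for j in order[:5]:
    print(f"feature {j} appears in {100 * lximp[j]:.1f}% of the sufficient explanations")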

Local rule-based explanations

For a given instance (x, y) and its Sufficient Explanation $S$ such that $SDP_S(x; y) \geq \pi$, we compute a local minimal rule which contains $x$ such that every observation $z$ satisfying this rule has $SDP_S(z; y) \geq \pi$. See the paper here for more details.

  • How to compute the local rule explanations?
sdp, rules, _, _, _ = acv_xplainer.compute_sdp_maxrules(X, y, data_bground, y_bground, S) # data_bground is the background dataset used for the estimation; it should be the training samples.
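
The structure of the returned rules is best checked directly; as a minimal, assumption-laden sketch, one can print the rule and its SDP for the first instance:

# Illustrative only: inspect the rule found for the first instance.
print(sdp[0])    # estimated SDP of the rule
print(rules[0])  # assumed format: per-feature intervals defining a rectangle around x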

B. Tree-based explanations

ACV gives Shapley Values explanations for XGBoost, LightGBM, CatBoostClassifier, scikit-learn and pyspark tree models. It provides the following Shapley Values:

  • Classic local Shapley Values (the value function is the conditional expectation $E[f(X) \mid X_S = x_S]$)
  • Active Shapley values (Local fidelity and Sparse by design)
  • Swing Shapley Values (The Shapley values are interpretable by design) (Coming soon)

In addition, we use the coalitional version of SV to properly handle categorical variables in the computation of SV.

See the papers here.

To explain the tree-based models above, we need to wrap our model in an ACVTree.

from acv_explainers import ACVTree

forest = XGBClassifier() # or any tree-based model
# ... train the model

acvtree = ACVTree(forest, data_bground) # data_bground is the background dataset used for the estimation; it should be the training samples.

Accurate Shapley Values

sv = acvtree.shap_values(X)

Note that it provides a more accurate estimation than the path-dependent TreeSHAP when the variables are dependent.
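
Since the exact shape of sv can depend on the model (a multi-class model may add an output dimension), a quick sanity check before indexing is useful; this is an illustrative sketch, not documented API:

import numpy as np

sv = np.asarray(sv)
print(sv.shape)  # assumed: (n_samples, n_features) or (n_samples, n_features, n_outputs)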

Accurate Shapley Values with encoded categorical variables

Let us assume we have a categorical variable Y with k modalities that we encoded by introducing the dummy variables $Y_0, \dots, Y_{k-1}$. As shown in the paper, we must take the coalition of the dummy variables to compute the Shapley values correctly.

# cat_index := list[list[int]] that contains the column indices of the dummies or one-hot variables grouped 
# together for each variable. For example, if we have only 2 categorical variables Y, Z 
# transformed into [Y_0, Y_1, Y_2] and [Z_0, Z_1, Z_2]

cat_index = [[0, 1, 2], [3, 4, 5]]
forest_sv = acvtree.shap_values(X, C=cat_index)

In addition, we can compute the SV given any coalition structure. For example, let us assume we have 10 variables and want the following coalitions:

coalition = [[0, 1, 2], [3, 4], [5, 6]]
forest_sv = acvtree.shap_values(X, C=coalition)
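
Note that in the user reports further below, an empty coalition structure is passed as C = [[]]; this appears to be the convention for requesting plain, ungrouped Shapley values, although it is inferred from user code rather than stated in this document.

forest_sv = acvtree.shap_values(X, C=[[]])  # assumed convention: no coalition of variables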

How to compute $SDP_S(x; f)$ for a tree-based classifier?

Recall that $SDP_S(x; f)$ is the probability that the prediction remains the same when we fix the variables $X_S = x_S$.

sdp = acvtree.compute_sdp_clf(X, S, data_bground) # data_bground is the background dataset used for the estimation; it should be the training samples.
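
As a small usage sketch (assuming sdp comes back as one probability per instance), one can check which predictions are stable at a chosen level:

import numpy as np

# Illustrative: count the instances whose prediction is stable given S.
stable = np.asarray(sdp) >= 0.9
print(f"{stable.sum()} / {len(sdp)} instances have SDP >= 0.9")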

How to compute the Sufficient Coalition and the Global SDP importance for a tree-based classifier?

Recall that the Minimal Sufficient Explanation is the minimal subset $S$ such that fixing the values $X_S = x_S$ maintains the prediction with high probability $\pi$.

sdp_importance, sdp_index, size, sdp = acvtree.importance_sdp_clf(X, data_bground) # data_bground is the background dataset used for the estimation; it should be the training samples.

Active Shapley values

The Active Shapley values are SV based on a new game defined in the paper (Accurate and robust Shapley Values for explaining predictions), focusing on locally important variables such that null (non-important) variables have zero SV and the "payout" is fairly distributed among the active variables.

  • How to compute Active Shapley values?
import acv_explainers

# First, we need to compute the Active and Null coalitions
sdp_importance, sdp_index, size, sdp = acvtree.importance_sdp_clf(X, data_bground)
S_star, N_star = acv_explainers.utils.get_active_null_coalition_list(sdp_index, size)

# Then, we use the active coalitions found to compute the Active Shapley values.
C = [[]]  # assumed: empty coalition structure, i.e. no grouped variables (see the categorical example above)
forest_asv_adap = acvtree.shap_values_acv_adap(X, C, S_star, N_star, size)
Remarks for tree-based explanations:

If you don't want to use multi-threading (due to scaling or memory problems), add "_nopa" to each function name (e.g. compute_sdp_clf ==> compute_sdp_clf_nopa). You can also precompute the different values needed in a cache by setting cache=True when initializing ACVTree, e.g. ACVTree(model, data_bground, cache=True); see the sketch below.
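
For concreteness, the two options mentioned above look as follows; the calls mirror their parallel counterparts, but double-check the exact signatures against the API reference.

# Single-threaded variant: append "_nopa" to the function name.
sdp = acvtree.compute_sdp_clf_nopa(X, S, data_bground)

# Precompute the values needed in cache at initialization.
acvtree = ACVTree(forest, data_bground, cache=True)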

Examples and tutorials (a lot more to come...)

A tutorial on the usage of ACV can be found in demo_acv, and the notebooks below demonstrate different use cases for ACV. Look inside the notebook directory of the repository if you want to play with the original notebooks yourself.

Comments
  • acvtree.global_sdp_importance_clf error with LightGBM, but not RandomForest

    Hello,

    First of all, kudos for this lib, it's amazing how many models you already support (sklearn, skopt, {xgb,cat,light}gbm).

    My test works for RandomForest, with basically the same current performance limitations. Having looked at the code, maybe the C extension (cext_acv) which should speed things up is not yet implemented.

    Basically, the very same run of global_sdp_importance_clf on a subset (due to the performance issue) which works with sklearn RandomForest fails with LightGBM.

    Since the syntax changed a little from the previous lib, I followed one notebook example for the C parameter (maybe I'm wrong there).

    n = 100
    C = [[]]
    # columns = list of features
    # already fitted model of type "lightgbm.sklearn.LGBMClassifier"
    acvtree = ACVTree(model, X_train[:n].values)
    sdp_importance_m, sdp_importance, sdp_importance_proba, sdp_importance_coal_count, sdp_importance_variable_count = acvtree.global_sdp_importance_clf(data=X_test[:n].values[y_test[:n]<1], data_bground=X_train[:n].values, columns_names=columns, global_proba=0.9, decay=0.7, threshold=0.6, proba=0.9, C=C, verbose=0)
    

    leading to this error

    ~/.virtualenvs/venv/lib/python3.8/site-packages/acv_explainers/acv_tree.py in global_sdp_importance_clf(self, data, data_bground, columns_names, global_proba, decay, threshold, proba, C, verbose)
         64                           proba, C, verbose):
         65
    ---> 66         return global_sdp_importance(data, data_bground, columns_names, global_proba, decay, threshold,
         67                           proba, C, verbose, self.compute_sdp_clf, self.predict)
         68
    
    ~/.virtualenvs/venv/lib/python3.8/site-packages/acv_explainers/py_acv.py in global_sdp_importance(data, data_bground, columns_names, global_proba, decay, threshold, proba, C, verbose, cond_func, predict)
        475             fx = predict(np.expand_dims(ind, 0))[0]
        476
    --> 477         local_sdp(ind, fx, threshold, proba, index, data_bground, final_coal, decay,
        478                   C=C, verbose=verbose, cond_func=cond_func)
        479
    
    ~/.virtualenvs/venv/lib/python3.8/site-packages/acv_explainers/py_acv.py in local_sdp(x, f, threshold, proba, index, data, final_coal, decay, C, verbose, cond_func)
        405                 if c not in C_off:
        406
    --> 407                     value = cond_func(x, f, threshold, S=chain_l(c), data=data)
        408                     c_value[size][str(c)] = value
        409
    
    ~/.virtualenvs/venv/lib/python3.8/site-packages/acv_explainers/acv_tree.py in compute_sdp_clf(self, x, fx, tx, S, data)
         37
         38     def compute_sdp_clf(self, x, fx, tx, S, data):
    ---> 39         sdp = cond_sdp_forest_clf(x, fx, tx, self.trees, S, data=data)
         40         return sdp
         41
    
    ~/.virtualenvs/venv/lib/python3.8/site-packages/acv_explainers/py_acv.py in cond_sdp_forest_clf(x, fx, tx, forest, S, data)
        239
        240         s = (mean_forest['all'] - mean_forest['down']) / (mean_forest['up'] - mean_forest['down'])
    --> 241         sdp += 0 * (s[int(fx)] < 0) + 1 * (s[int(fx)] > 1) + s[int(fx)] * (0 <= s[int(fx)] <= 1)
        242     # sdp = 0 * (sdp[int(fx)] < 0) + 1 * (sdp[int(fx)] > 1) + sdp[int(fx)] * (0 <= sdp[int(fx)] <= 1)
        243     return sdp/n_trees
    
    IndexError: index 1 is out of bounds for axis 0 with size 1
    

    BTW since you seem interested in multi-armed bandits, you may find this hyperparameter search library interesting. It's a multi-armed bandit Bayesian optimizer based on Gaussian processes.

    Thanks!

    opened by flamby 2
  • ValueError: Buffer dtype mismatch, expected 'long' but got 'long long'

    If I try to run the code in the Python notebook and change it into something a Python script can run, the code raises the error in the title when calculating the SDP using compute_sdp_clf. I believe this has something to do with the Cython file; in line 238 something has to be changed, maybe long into long long?

    opened by justinthecoder 1
  • Cheers

    I have no actual issue at the moment but just finished reading the papers and I wanted to offer my praise for your work. It is great stuff.

    I am also very much looking forward to the implementation of Swing Shapley Values for tree-based models.

    I may have some real-world tests/comparisons between your methods and classic SHAP results that I can at least partially share in a few months.

    Thank you again for sharing your work!

    opened by CanML 0
  • Getting `clang: error: unsupported option '-fopenmp'` when installing with pip on M1 mac

    Hi!

    I'm eager to try this library out. Unfortunately I get an error upon installation:

    clang: error: unsupported option '-fopenmp'
    
    • I updated llvm using homebrew (did not solve the problem).

    • clang --help | grep fopenmp returns

        -fopenmp                Parse OpenMP pragmas and generate parallel code.
      

    so it's just strange that this argument is not recognized during installation.

    Any idea how to solve this?

    My specs are:

    Apple M1 Pro (2021)
    MacOS 12.5.1
    Python 3.10
    
    opened by ulfaslakprecis 1
  • TypeError: unhashable type: 'list' in compute_local_sdp function

    Hello,

    Thank you for a great package. I've been trying out the code on the front page. I ran into an issue when I was trying to generate the local explanatory importance scores and I wondered if you might be able to help? I got the following error:


    TypeError                                 Traceback (most recent call last)
    Input In [24], in <cell line: 1>()
    ----> 1 lximp = acv_explainer.compute_local_sdp(X_train.shape[1], sufficient_expl)
    
    File ~/.local/lib/python3.9/site-packages/acv_explainers/acv_agnosticX.py:627, in ACXplainer.compute_local_sdp(d, sufficient_coal)
        625 flat = [item for sublist in sufficient_coal for item in sublist]
        626 flat = pd.Series(flat)
    --> 627 flat = dict(flat.value_counts() / len(sufficient_coal))
        628 local_sdp = np.zeros(d)
        629 for key in flat.keys():
    
    TypeError: unhashable type: 'list'
    

    I tried to manually calculate the LEI based on your paper, since it's just a simple percentage of how many SE in the A-SE a feature appears in, but I also found that the sufficient_expl list has negative values? Do they indicate a feature as well? Worth noting that sometimes the only result I get for the A-SE is -1.

    opened by Mythreyi-V 1
  • Doesn't work with Windows

    I don't know if skranger is 100% required, but there aren't wheels for it, so it looks like it can't be installed https://github.com/crflynn/skranger/issues/53. I am uncertain if there is some other way to test it; for now, I'm going to try it with https://github.com/ml-tooling/ml-workspace, but I'm not sure how to use it on Windows.

    opened by set92 3
Releases(v1.2.3)
Owner: Salim Amoukou