XAI - An eXplainability toolbox for machine learning

Overview

XAI is a Machine Learning library designed with AI explainability at its core. It contains various tools that enable the analysis and evaluation of data and models. The XAI library is maintained by The Institute for Ethical AI & ML, and it was developed based on the 8 principles for Responsible Machine Learning.

You can find the documentation at https://ethicalml.github.io/xai/index.html. You can also check out our talk at TensorFlow London, where the idea was first conceived; the talk also provides insight into the definitions and principles used in this library.

YouTube video showing how to use XAI to mitigate undesired biases

This video of the talk presented at the PyData London 2019 conference provides an overview of the motivations for machine learning explainability, as well as techniques to introduce explainability and mitigate undesired biases using the XAI library.
Do you want to learn about more awesome machine learning explainability tools? Check out our community-built "Awesome Machine Learning Production & Operations" list which contains an extensive list of tools for explainability, privacy, orchestration and beyond.

Current release: 0.1.0

If you want to see a fully functional demo in action clone this repo and run the Example Jupyter Notebook in the Examples folder.

What do we mean by eXplainable AI?

We see explainability as more than just an algorithmic challenge: it requires a combination of data science best practices and domain-specific knowledge. The XAI library is designed to empower machine learning engineers and relevant domain experts to analyse the end-to-end solution and identify discrepancies that may result in sub-optimal performance relative to the objectives required. More broadly, the XAI library is designed around the three steps of explainable machine learning: 1) data analysis, 2) model evaluation, and 3) production monitoring.

We provide a visual overview of these three steps in the diagram below:

XAI Quickstart

Installation

The XAI package is on PyPI. To install you can run:

pip install xai

Alternatively you can install from source by cloning the repo and running:

python setup.py install 
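
On recent Python and setuptools versions, the equivalent pip-based install from the cloned repo root should also work and avoids invoking setup.py directly (the same pinned dependencies apply):

pip install .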

Usage

You can find example usage in the examples folder.

1) Data Analysis

With XAI you can identify imbalances in the data. For this, we will load the census dataset from the XAI library.

import xai.data
df = xai.data.load_census()
df.head()

View class imbalances for all categories of one column

ims = xai.imbalance_plot(df, "gender")

View imbalances for all categories across multiple columns

im = xai.imbalance_plot(df, "gender", "loan")

Balance classes using upsampling and/or downsampling

bal_df = xai.balance(df, "gender", "loan", upsample=0.8)
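
To sanity-check the effect of the balancing step, you can compare group sizes before and after with plain pandas (this check is not part of the xai API):

# Group sizes before and after balancing (illustrative check)
print(df.groupby(["gender", "loan"]).size())
print(bal_df.groupby(["gender", "loan"]).size())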

Perform custom operations on groups

groups = xai.group_by_columns(df, ["gender", "loan"])
for group, group_df in groups:    
    print(group) 
    print(group_df["loan"].head(), "\n")

Visualise correlations as a matrix

_ = xai.correlations(df, include_categorical=True, plot_type="matrix")

Visualise correlations as a hierarchical dendrogram

_ = xai.correlations(df, include_categorical=True)

Create a balanced validation and training split dataset
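
The split below expects a feature dataframe x, a target vector y and a list of categorical column names; these are prepared in the example notebook. A minimal sketch of how they could be derived from the census dataframe (the column choices here are illustrative, not prescriptive):

# Hypothetical preparation of the split inputs; the example notebook
# performs its own preprocessing, so treat this only as a sketch.
categorical_cols = ["gender", "ethnicity"]  # categorical columns referenced in the examples
y = df["loan"].values                       # binary target used throughout the examples
x = df.drop("loan", axis=1)                 # remaining columns as features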

# Balanced train-test split with minimum 300 examples of 
#     the cross of the target y and the column gender
x_train, y_train, x_test, y_test, train_idx, test_idx = \
    xai.balanced_train_test_split(
            x, y, "gender", 
            min_per_group=300,
            max_per_group=300,
            categorical_cols=categorical_cols)

x_train_display = bal_df[train_idx]
x_test_display = bal_df[test_idx]

print("Total number of examples: ", x_test.shape[0])

df_test = x_test_display.copy()
df_test["loan"] = y_test

_= xai.imbalance_plot(df_test, "gender", "loan", categorical_cols=categorical_cols)

2) Model Evaluation

We can also analyse the interaction between inference results and input features. For this, we will train a single-layer deep learning model.

model = build_model(proc_df.drop("loan", axis=1))

model.fit(f_in(x_train), y_train, epochs=50, batch_size=512)

probabilities = model.predict(f_in(x_test))
predictions = list((probabilities >= 0.5).astype(int).T[0])
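
The build_model and f_in helpers (and proc_df) above come from the example notebook rather than the xai API. A minimal sketch of what they might look like, assuming a Keras binary classifier and features that are already numerically encoded:

# Hypothetical stand-ins for the notebook helpers used above.
import numpy as np
import tensorflow as tf

def build_model(x):
    # Single hidden-layer binary classifier over the encoded feature columns.
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(x.shape[1],)),
        tf.keras.layers.Dense(32, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    # Compiled with accuracy so that model.evaluate(...)[1] returns accuracy.
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    return model

def f_in(df):
    # Convert the feature dataframe (or array) into the float32 matrix Keras expects.
    return np.asarray(df, dtype="float32")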

Visualise permutation feature importance

def get_avg(x, y):
    # model.evaluate returns [loss, metric, ...]; index 1 is the first compiled
    # metric (accuracy in the example), used here as the importance score.
    return model.evaluate(f_in(x), y, verbose=0)[1]

imp = xai.feature_importance(x_test, y_test, get_avg)

imp.head()
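
The third argument is simply a callable that scores a (features, labels) pair, so other metrics can be plugged in. For instance, a hedged sketch scoring the same model with ROC AUC via scikit-learn (get_auc is a hypothetical helper, not part of xai):

from sklearn.metrics import roc_auc_score

def get_auc(x, y):
    # Score the (possibly permuted) features with ROC AUC instead of accuracy.
    return roc_auc_score(y, model.predict(f_in(x)).ravel())

imp_auc = xai.feature_importance(x_test, y_test, get_auc)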

Identify metric imbalances against all test data

_= xai.metrics_plot(
        y_test, 
        probabilities)

Identify metric imbalances across a specific column

_ = xai.metrics_plot(
    y_test, 
    probabilities, 
    df=x_test_display, 
    cross_cols=["gender"],
    categorical_cols=categorical_cols)

Identify metric imbalances across multiple columns

_ = xai.metrics_plot(
    y_test, 
    probabilities, 
    df=x_test_display, 
    cross_cols=["gender", "ethnicity"],
    categorical_cols=categorical_cols)

Draw confusion matrix

xai.confusion_matrix_plot(y_test, predictions)

Visualise the ROC curve against all test data

_ = xai.roc_plot(y_test, probabilities)

Visualise the ROC curves grouped by a protected column

protected = ["gender", "ethnicity", "age"]
_ = [xai.roc_plot(
    y_test, 
    probabilities, 
    df=x_test_display, 
    cross_cols=[p],
    categorical_cols=categorical_cols) for p in protected]

Visualise accuracy grouped by probability buckets

d = xai.smile_imbalance(
    y_test, 
    probabilities)

Visualise statistical metrics grouped by probability buckets

d = xai.smile_imbalance(
    y_test, 
    probabilities,
    display_breakdown=True)

Visualise benefits of adding manual review on probability thresholds

d = xai.smile_imbalance(
    y_test, 
    probabilities,
    bins=9,
    threshold=0.75,
    manual_review=0.375,
    display_breakdown=False)

Comments
  • matplotlib error while installing package

    Collecting matplotlib==3.0.2

    Using cached matplotlib-3.0.2.tar.gz (36.5 MB) ERROR: Command errored out with exit status 1: command: /opt/anaconda3/envs/ethicalml/bin/python -c 'import sys, setuptools, tokenize; sys.argv[0] = '"'"'/private/var/folders/wv/m62_p54d5bx1dnq_m07ck3l40000gn/T/pip-install-303disb7/matplotlib/setup.py'"'"'; file='"'"'/private/var/folders/wv/m62_p54d5bx1dnq_m07ck3l40000gn/T/pip-install-303disb7/matplotlib/setup.py'"'"';f=getattr(tokenize, '"'"'open'"'"', open)(file);code=f.read().replace('"'"'\r\n'"'"', '"'"'\n'"'"');f.close();exec(compile(code, file, '"'"'exec'"'"'))' egg_info --egg-base /private/var/folders/wv/m62_p54d5bx1dnq_m07ck3l40000gn/T/pip-install-303disb7/matplotlib/pip-egg-info

    opened by ArpitSisodia 3
  • Requirements

    Your requirements are very restrictive. Could you please change them to >= instead of ==? For example:

    numpy>=1.3
    pandas>=0.23.0
    matplotlib>2.02,<=3.0.3
    scikit-learn>=0.19.0
    
    opened by idanmoradarthas 3
  • converters the probs into np array if its already not

    The smile_imbalance() function's "probs" argument does not specify that it has to be a NumPy array, but it does. I have added that data type to the argument so users know when they refer to the docs, and I have also added an np.array() call, which is idempotent (if the value passed is already a NumPy array it does nothing; if it is a list it converts it into a NumPy array).

    Suggestion

    • If possible, could you consider adding a "save_plot_path" parameter to each function, so that when this package is used in production (which I do, and which anyone considering continuous model delivery would do) all these plots can be saved to a particular directory for data scientists to look at later? In production the code runs in scripts on EC2 or other cloud servers, not in Jupyter notebooks.
    • My use case is that I retrain the model every week, and XAI lets me generate an evaluation report so I can remotely decide whether to push this week's model into production.
    • I considered adding it myself, but I was not sure whether this is the direction you wanted to take.

    Thank you

    opened by sai-krishna-msk 2
  • Unable to install package

    Hello!

    I've been trying to install this package and am unable to do so. I've tried both methods on my Ubuntu machine.

    1. pip install xai
    2. python setup.py install

    What can I do to install this? Also, is this project active anymore at all?

    opened by varunbanda 2
  • Can we explain BERT models using this package?

    I'm working with text data and looking for ways to explain BERT models. Is there any workaround using XAI, or any other package/resource that anyone can recommend?

    opened by techwithshadab 1
  • Add a conda install option for `xai`

    A conda installation option could be very helpful. I have already started working on this, to add xai to conda-forge.

    Conda-forge PR:

    • https://github.com/conda-forge/staged-recipes/pull/17601

    Once the conda-forge PR is merged, you will be able to install the library with conda as follows:

    conda install -c conda-forge xai
    

    :bulb: I will push a PR to update the docs once the package is available on conda-forge.

    opened by sugatoray 0
  • Wrong series returned from _curve

    There is a bug in https://github.com/EthicalML/xai/blob/master/xai/__init__.py#L962

    It is currently written as

    r1s = r2s = []
    

    but it should instead be

    r1s, r2s = [], []
    

    The impact is that if the user would like to use the r1s and r2s returned to reconstruct the curve (e.g. for storing the data for later analysis), they would find that r1s and r2s refer to the same list instance, which holds all the curve data that should have been stored separately in r1s and r2s (see the short demonstration after this comments list).

    opened by chen0040 1
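
For readers unfamiliar with this Python pitfall, a minimal standalone demonstration (illustrative only, not XAI code) of why the chained assignment aliases the two names:

# Chained assignment binds both names to the same list object.
r1s = r2s = []
r1s.append(1)
r2s.append(2)
print(r1s is r2s)  # True
print(r1s, r2s)    # [1, 2] [1, 2] -- both names see every append

# Separate assignment creates two independent lists.
r1s, r2s = [], []
r1s.append(1)
r2s.append(2)
print(r1s, r2s)    # [1] [2]
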
Releases: v0.1.0
Owner
The Institute for Ethical Machine Learning
The Institute for Ethical Machine Learning is a think-tank that brings together technology leaders, policymakers & academics to develop standards for ML.