Awesome Explainable Graph Reasoning
A collection of research papers and software related to explainability in graph machine learning.
Contents
License
License
Hi all, I've added a new reference to a paper of mine related to counterfactual explanations for molecule predictions. I hope this is appreciated :)
Link to paper: https://arxiv.org/abs/2104.08060
You might want to double-check this commit is OK - I added a new sub-heading called 'concept-based methods', which is not one of the categories from the survey paper that the rest of the approaches are categorised into.
Two papers on rule-based reasoning:
And one application note on a web application for visualizing predictions and their explanations, made using the approaches above:
The work 'Evaluating Attribution for Graph Neural Networks' is particularly useful because of its benchmarking approach: it compares several attribution techniques across a range of GNN architectures.
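To give a feel for what such attribution methods compute, here is a minimal sketch of gradient-times-input attribution, one of the simple baselines such benchmarks typically cover. The two-layer message-passing model is a toy stand-in of my own, not one of the paper's benchmarked architectures.

```python
import torch

# Toy message-passing network: each layer averages neighbor features
# (via a row-normalized adjacency) and applies a linear map.
class TinyGNN(torch.nn.Module):
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.lin1 = torch.nn.Linear(in_dim, hid_dim)
        self.lin2 = torch.nn.Linear(hid_dim, 1)

    def forward(self, x, adj):
        h = torch.relu(self.lin1(adj @ x))  # one round of neighbor averaging
        return self.lin2(adj @ h).mean()    # graph-level score

n_nodes, n_feats = 5, 8
x = torch.randn(n_nodes, n_feats, requires_grad=True)

# Row-normalized adjacency with self-loops for a small path graph.
adj = torch.eye(n_nodes)
for i, j in [(0, 1), (1, 2), (2, 3), (3, 4)]:
    adj[i, j] = adj[j, i] = 1.0
adj = adj / adj.sum(dim=1, keepdim=True)

model = TinyGNN(n_feats, 16)
model(x, adj).backward()

# Gradient-times-input, summed over feature dims, gives one
# attribution score per node.
node_attribution = (x * x.grad).sum(dim=1)
print(node_attribution)
```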
Hi, I have been impressed by how fast this field is growing. As I continue reading and learning, I will contribute papers to make this list even better.
In particular, @flyingdoog maintains a list of the papers (grouped by year) at https://github.com/flyingdoog/awesome-graph-explainability-papers that may be interesting to review.
Anchor: This repository has code for the paper 'Anchors: High-Precision Model-Agnostic Explanations'. An anchor explanation is a rule that sufficiently 'anchors' a prediction locally, so that changes to the rest of the instance's feature values do not change the prediction.
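For a feel of the interface, a minimal sketch using the anchor package on a scikit-learn classifier; the constructor and method names follow the repository's examples but may differ across versions.

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from anchor import anchor_tabular

data = load_iris()
clf = RandomForestClassifier().fit(data.data, data.target)

explainer = anchor_tabular.AnchorTabularExplainer(
    class_names=list(data.target_names),
    feature_names=list(data.feature_names),
    train_data=data.data,
)

# Find a rule that holds the prediction fixed with >= 95% precision.
exp = explainer.explain_instance(data.data[0], clf.predict, threshold=0.95)
print(" AND ".join(exp.names()))   # the anchor rule's predicates
print("precision:", exp.precision())
print("coverage:", exp.coverage())
```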
Shapley: a Python library for evaluating binary classifiers in a machine learning ensemble.
Alibi: an open-source Python library aimed at machine learning model inspection and interpretation. The focus of the library is to provide high-quality implementations of black-box, white-box, local and global explanation methods for classification and regression models.
Lucid: a collection of infrastructure and tools for research in neural network interpretability. Note that it does not currently support TensorFlow 2.
SHAP (SHapley Additive exPlanations): a game-theoretic approach to explaining the output of any machine learning model. It connects optimal credit allocation with local explanations using the classic Shapley values from game theory and their related extensions.
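A minimal sketch of the typical SHAP workflow, assuming the unified shap.Explainer API and the bundled demo dataset; plotting helpers vary by version.

```python
import shap
import xgboost  # any model shap supports works here

# Bundled demo dataset (census income) and a tree-based model.
X, y = shap.datasets.adult()
model = xgboost.XGBClassifier().fit(X, y)

explainer = shap.Explainer(model, X)  # picks a suitable algorithm
shap_values = explainer(X[:100])      # per-feature attributions

shap.plots.bar(shap_values)           # global importance summary
```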
Hierarchical neural-net interpretations (ACD): produces hierarchical interpretations for a single prediction made by a PyTorch neural network. Official code for the ICLR 2019 paper 'Hierarchical interpretations for neural network predictions'.
Xplique: a Python toolkit dedicated to explainability, currently based on TensorFlow.
AI Explainability 360 (v0.2.1): an open-source toolkit that supports interpretability and explainability of datasets and machine learning models.
Visualization Toolbox for Long Short Term Memory networks (LSTMs)
An ultra-lightweight 3D renderer for TensorFlow/Keras neural network architectures.
L2X: code for replicating the experiments in the paper 'Learning to Explain: An Information-Theoretic Perspective on Model Interpretation' (ICML 2018).
Dream-Creator: this project aims to simplify the process of creating a custom DeepDream model by using pretrained GoogLeNet models and custom image datasets.
Tool for visualizing attention in the Transformer model (BERT, GPT-2, Albert, XLNet, RoBERTa, CTRL, etc.)
TensorFlowTTS: provides real-time state-of-the-art speech synthesis architectures such as Tacotron-2, MelGAN, Multi-band MelGAN, FastSpeech, and FastSpeech2, based on TensorFlow 2.
lime: this project is about explaining what machine learning classifiers (or models) are doing. At the moment, it supports explaining individual predictions for text classifiers and for classifiers that act on tables (numpy arrays) or images.
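A minimal sketch of LIME's tabular workflow; argument names follow the package's documented interface but may shift between versions.

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from lime.lime_tabular import LimeTabularExplainer

data = load_iris()
clf = RandomForestClassifier().fit(data.data, data.target)

explainer = LimeTabularExplainer(
    training_data=data.data,
    feature_names=data.feature_names,
    class_names=list(data.target_names),
    mode="classification",
)

# Fit a local surrogate around one instance and report the top features.
exp = explainer.explain_instance(
    data.data[0], clf.predict_proba, num_features=4
)
print(exp.as_list())  # (feature condition, weight) pairs
```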
FairML (Auditing Black-Box Predictive Models): a Python toolbox for auditing machine learning models for bias.
Summary Explorer is a tool to visually explore the state-of-the-art in text summarization.
Quiver: interactive convnet feature visualization for Keras.
Soft-Decision-Tree: the PyTorch implementation of the paper 'Distilling a Neural Network Into a Soft Decision Tree'.
MapExtrackt: a tool for extracting and visualizing the feature maps of convolutional neural networks.