Code to reproduce experiments in the paper "Explainability Requires Interactivity".

Overview

Explainability Requires Interactivity

This repository contains the code to train all custom models used in the paper Explainability Requires Interactivity, as well as to create all static explanations (heat maps and generative explanations). For our interactive framework, see the sister repository.

Precomputed generative explanations are located in the static_generative_explanations directory.

Requirements

Install the conda environment via conda env create -f env.yml (depending on your system you might need to change some versions, e.g. for pytorch, cudatoolkit and pytorch-lightning).

For some parts you will need the FairFace model, which can be downloaded from the authors' repo. You will only need the res34_fair_align_multi_7_20190809.pt file.
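
For reference, the checkpoint can be loaded following the loading code in the FairFace authors' predict.py; a minimal sketch (the 18 outputs are the concatenated race, gender, and age logits):

import torch
import torch.nn as nn
import torchvision

# ResNet-34 backbone with an 18-way head: 7 race + 2 gender + 9 age bins.
model = torchvision.models.resnet34()
model.fc = nn.Linear(model.fc.in_features, 18)
model.load_state_dict(torch.load("res34_fair_align_multi_7_20190809.pt",
                                 map_location="cpu"))
model.eval()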

Training classification networks

CelebA dataset

You first need to download and decompress the CelebAMask-HQ dataset. Then run the training with

python train.py --dset celeb --dset_path /PATH/TO/CelebAMask-HQ/ --classes_or_attr Smiling --target_path /PATH/TO/OUTPUT

/PATH/TO/CelebAMask-HQ/ should contain the CelebAMask-HQ-attribute-anno.txt file and a CelebA-HQ-img directory. Any of the attribute columns in CelebAMask-HQ-attribute-anno.txt can be used as --classes_or_attr; in the paper we used Heavy_Makeup, Male, Smiling, and Young.
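
To sanity-check the dataset layout before training, you can load the annotation file directly; a minimal sketch, assuming the standard CelebA attribute-file format (a count line, a header line, then one row per image):

import os
import pandas as pd

root = "/PATH/TO/CelebAMask-HQ"
# The header has one field fewer than the data rows, so pandas uses the
# image names as the index.
anno = pd.read_csv(os.path.join(root, "CelebAMask-HQ-attribute-anno.txt"),
                   skiprows=1, sep=r"\s+")
print(anno["Smiling"].value_counts())   # attribute labels are +1 / -1
print(os.path.exists(os.path.join(root, "CelebA-HQ-img", anno.index[0])))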

Flowers102 dataset

You first need to download and decompress the Flowers102 data. Then run the training with

python train.py --dset flowers102 --dset_path /PATH/TO/FLOWERS102/ --classes_or_attr 49-65 --target_path /PATH/TO/OUTPUT/

/PATH/TO/FLOWERS102/ should contain an imagelabels.mat file and an images directory. Classes 49 and 65 correspond to "Oxeye daisy" and "California poppy", while 63 and 54 correspond to "Black-eyed Susan" and "Sunflower", as used in the paper.
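
Analogously, you can check the label file and per-class image counts; a minimal sketch, assuming the standard Flowers102 format in which imagelabels.mat holds a 1-based labels array with one entry per image:

import numpy as np
from scipy.io import loadmat

labels = loadmat("/PATH/TO/FLOWERS102/imagelabels.mat")["labels"].squeeze()
for cls in (49, 65):   # Oxeye daisy / California poppy
    print(cls, "->", int(np.sum(labels == cls)), "images")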

Generating heatmap explanations

Heatmap explanations are generated using the Captum library. After training, run explanations via

python static_exp.py --model_path /PATH/TO/MODEL.pt --img_path /PATH/TO/IMGS/ --model_name celeb --fig_dir /PATH/TO/OUTPUT/

/PATH/TO/IMGS/ should contain (only) image files; it can be omitted to run on the default images exported by train.py. To run on FairFace, choose --model_name fairface and add --attr age or --attr gender. Other explanation methods can easily be added by modifying the explain_all function in static_exp.py (a sketch of the underlying Captum pattern follows below). Explanations are saved to fig_dir. We have only tested this with the networks trained on the facial-image data from the previous step, but any ResNet-18 with a scalar output layer should work just as well.
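
As mentioned above, new methods follow the usual Captum pattern; a minimal sketch (the function my_explain and its wiring into explain_all are hypothetical, only the Captum calls are real API):

import torch
from captum.attr import IntegratedGradients

def my_explain(model, img):
    # img: a normalized (1, 3, H, W) tensor; model: scalar-output network,
    # so no target index is needed.
    model.eval()
    ig = IntegratedGradients(model)
    attr = ig.attribute(img, baselines=torch.zeros_like(img))
    return attr.squeeze(0).sum(0)   # sum over channels -> single heat map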

Generating generative explanations

First, clone the original NVIDIA stylegan2-ada-pytorch repo and make sure everything works as expected (e.g., run the getting-started code). If the code gets stuck while setting up the custom CUDA plugins, ctrl-C will usually make it fall back to a slower reference implementation, which is good enough for our use case. Next, add the repo to your PYTHONPATH (e.g. via export PYTHONPATH=$PYTHONPATH:/PATH/TO/stylegan2-ada-pytorch/); the sketch below shows one way to verify the setup. To generate explanations, you need to 0) train an image model (see above, or use the FairFace model); 1) create a dataset of latent codes and labels; 2) train a latent-space logistic regression model; and 3) create the explanations. As each of these steps can be very slow, we split them into separate phases.
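
To verify the setup, a quick check along the lines of the repo's own generate.py (a sketch; the .pkl path is a placeholder):

import torch
import dnnlib    # provided by stylegan2-ada-pytorch, must be on PYTHONPATH
import legacy    # ditto

with dnnlib.util.open_url("/PATH/TO/STYLEGAN2.pkl") as f:
    G = legacy.load_network_pkl(f)["G_ema"]
z = torch.randn(1, G.z_dim)
img = G(z, None)    # None: no class conditioning (unconditional model)
print(img.shape)    # e.g. (1, 3, 1024, 1024) for an FFHQ generator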

Create labeled latent dataset

First, make sure to train at least one image model as described above and/or to download the FairFace model.

python generative_exp.py --phase 1 --attrs Smiling,ff-skin-color --base_dir /PATH/TO/BASE/ --generator_path /PATH/TO/STYLEGAN2.pkl --n_train 20000 --n_valid 5000

The base_dir is the directory where all files and sub-directories are stored; it should be the same as the target_path from train.py (e.g., just .). When using --attrs Smiling,ff-skin-color, it should contain, e.g., the celeb-Smiling directory and the res34_fair_align_multi_7_20190809.pt file.
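
Schematically, phase 1 samples latent codes, renders them with the generator, and labels the renders with the trained image model(s); a rough sketch with illustrative names (in practice the renders must be resized and normalized for the classifier):

import torch

@torch.no_grad()
def make_latent_dataset(G, classifier, n):
    ws, ys = [], []
    for _ in range(n):
        z = torch.randn(1, G.z_dim)
        w = G.mapping(z, None)    # work in the better-behaved W space
        img = G.synthesis(w)
        ws.append(w)
        ys.append(classifier(img))    # latent code inherits the image's label
    return torch.cat(ws), torch.cat(ys)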

Train latent space model

After the first step, run

python generative_exp.py --phase 2 --attrs Smiling,ff-skin-color --base_dir /PATH/TO/BASE/ --epochs 50

with the same base_dir and attrs as in phase 1.
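
For intuition, the latent-space model amounts to a logistic regression on the phase-1 latent codes; a hypothetical sketch (the .npy file names are made up, and generative_exp.py uses its own training loop rather than scikit-learn):

import numpy as np
from sklearn.linear_model import LogisticRegression

X_train = np.load("latents_train.npy")   # (n_train, latent_dim), hypothetical
y_train = np.load("labels_train.npy")    # binary attribute labels
X_valid = np.load("latents_valid.npy")
y_valid = np.load("labels_valid.npy")

clf = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("validation accuracy:", clf.score(X_valid, y_valid))
# The weight vector gives the attribute direction in latent space.
direction = clf.coef_.squeeze() / np.linalg.norm(clf.coef_)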

Create generative explanations

Finally, you can generate generative explanations via

python generative_exp.py --phase 3 --base_dir /PATH/TO/BASE/ --eval_attr Smiling --generator_path /PATH/TO/STYLEGAN2.pkl --attrs Smiling,ff-skin-color --reconstruction_steps 1000 --ampl 0.09 --input_img_dir /PATH/TO/IMAGES/ --output_dir /PATH/TO/OUTPUT/

Here, eval_attr is the class of the final evaluation model that you want to explain; attrs are the same as before and define the directions in latent space; input_img_dir is a directory containing (only) the image files to be explained. Explanations are saved to output_dir.
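
Conceptually, phase 3 reconstructs each input image's latent code (--reconstruction_steps) and then walks along the learned attribute direction; a rough sketch with illustrative names:

import torch

@torch.no_grad()
def traverse(G, w, direction, ampl=0.09, steps=5):
    # w: reconstructed latent code of the input image;
    # direction: unit vector from the latent logistic regression (as a tensor);
    # ampl: step size along the direction (--ampl).
    frames = []
    for i in range(-steps, steps + 1):
        frames.append(G.synthesis(w + i * ampl * direction))
    return frames   # the attribute changes while everything else stays fixed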
