PyTorch implementations of Bayes by Backprop, MC Dropout, SGLD, the Local Reparametrisation Trick, KF-Laplace, SG-HMC and more

Overview

Bayesian Neural Networks

License: MIT | Python 2.7+ | PyTorch 1.0

PyTorch implementations for the following approximate inference methods:

  • Bayes by Backprop
  • Bayes by Backprop with the Local Reparametrisation Trick
  • MC Dropout
  • Stochastic Gradient Langevin Dynamics (SGLD)
  • Preconditioned SGLD (pSGLD)
  • Kronecker-Factorised Laplace
  • Stochastic Gradient Hamiltonian Monte Carlo with scale adaptation

We also provide code for:

  • Bootstrap MAP Ensemble

Prerequisites

  • PyTorch
  • Numpy
  • Matplotlib

The project is written in Python 2.7 and PyTorch 1.0.1. If CUDA is available, it will be used automatically. The models can also run on CPU as they are not excessively big.

Usage

Structure

Regression experiments

We carried out homoscedastic and heteroscedastic regression experiments on toy datasets generated with a Gaussian Process ground truth, as well as on real data (six UCI datasets).

Notebooks/regression/(ModelName)_(ExperimentType).ipynb: Contains experiments using (ModelName) on (ExperimentType), i.e. homoscedastic/heteroscedastic. The heteroscedastic notebooks contain both toy and UCI dataset experiments for a given (ModelName).

We also provide Google Colab notebooks, which means you can run the experiments on a GPU for free. No modifications are required - all dependencies and datasets are added from within the notebooks - other than selecting Runtime -> Change runtime type -> Hardware accelerator -> GPU.

MNIST classification experiments

train_(ModelName)_(Dataset).py: Trains (ModelName) on (Dataset). Training metrics and model weights will be saved to the specified directories.

src/: General utilities and model definitions.

Notebooks/classification: An assortment of notebooks for model training, evaluation and digit-rotation uncertainty experiments, as well as weight-distribution plotting, weight pruning and loading of pre-trained models for experimentation.

Bayes by Backprop (BBP)

(https://arxiv.org/abs/1505.05424)

Colab notebooks with regression models: BBP homoscedastic / heteroscedastic

Train a model on MNIST:

python train_BayesByBackprop_MNIST.py [--model [MODEL]] [--prior_sig [PRIOR_SIG]] [--epochs [EPOCHS]] [--lr [LR]] [--n_samples [N_SAMPLES]] [--models_dir [MODELS_DIR]] [--results_dir [RESULTS_DIR]]

For an explanation of the script's arguments:

python train_BayesByBackprop_MNIST.py -h

Best results are obtained with a Laplace prior.

Local Reparametrisation Trick

(https://arxiv.org/abs/1506.02557)

Bayes by Backprop inference where the mean and variance of each layer's activations are calculated in closed form, and the activations are sampled instead of the weights. This makes the variance of the Monte Carlo ELBO estimator scale as 1/M, where M is the minibatch size, whereas sampling the weights gives a variance that scales as (M-1)/M. The KL divergence between Gaussians can also be computed in closed form, further reducing variance. Each epoch is faster to compute, and convergence is faster too.
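A minimal sketch of the idea for a single fully-connected layer (the class below is illustrative, not the repo's own implementation): the layer computes the pre-activation mean and variance in closed form from a mean-field Gaussian over the weights, then samples the activations directly.

```python
import torch
import torch.nn as nn

class LocalReparamLinear(nn.Module):
    """Mean-field Gaussian linear layer using the local reparametrisation trick."""
    def __init__(self, n_in, n_out):
        super().__init__()
        self.W_mu = nn.Parameter(torch.randn(n_in, n_out) * 0.1)
        self.W_logvar = nn.Parameter(torch.full((n_in, n_out), -6.0))
        self.b_mu = nn.Parameter(torch.zeros(n_out))

    def forward(self, x):
        # Closed-form mean and variance of the pre-activations.
        act_mu = x @ self.W_mu + self.b_mu
        act_var = (x ** 2) @ self.W_logvar.exp()
        # Sample activations rather than weights: one independent noise draw per example.
        eps = torch.randn_like(act_mu)
        return act_mu + act_var.sqrt() * eps
```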

Train a model on MNIST:

python train_BayesByBackprop_MNIST.py --model Local_Reparam [--prior_sig [PRIOR_SIG]] [--epochs [EPOCHS]] [--lr [LR]] [--n_samples [N_SAMPLES]] [--models_dir [MODELS_DIR]] [--results_dir [RESULTS_DIR]]

MC Dropout

(https://arxiv.org/abs/1506.02142)

A fixed dropout rate of 0.5 is set.

Colab notebooks with regression models: MC Dropout homoscedastic / heteroscedastic

Train a model on MNIST:

python train_MCDropout_MNIST.py [--weight_decay [WEIGHT_DECAY]] [--epochs [EPOCHS]] [--lr [LR]] [--models_dir [MODELS_DIR]] [--results_dir [RESULTS_DIR]]

For an explanation of the script's arguments:

python train_MCDropout_MNIST.py -h

Stochastic Gradient Langevin Dynamics (SGLD)

(https://www.ics.uci.edu/~welling/publications/papers/stoclangevin_v6.pdf)

In order to converge to the true posterior over w, the learning rate should be annealed according to the Robbins-Monro conditions. In practice, we use a fixed learning rate.
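For reference, a single SGLD step is just an SGD step on the minibatch estimate of the negative log-posterior with Gaussian noise of variance 2·lr injected into the update. A minimal sketch (the helper below is illustrative; the repo wraps this in its own optimiser):

```python
import torch

def sgld_step(params, lr):
    """One SGLD update: gradient step on the minibatch negative log-posterior plus injected noise.
    Assumes .grad already holds the gradient (scaled to the full dataset size)."""
    with torch.no_grad():
        for p in params:
            if p.grad is None:
                continue
            noise = torch.randn_like(p) * (2.0 * lr) ** 0.5
            p.add_(-lr * p.grad + noise)
```

Posterior samples are then obtained by saving copies of the weights every few epochs after a burn-in period.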

Colab notebooks with regression models: SGLD homoscedastic / heteroscedastic

Train a model on MNIST:

python train_SGLD_MNIST.py [--use_preconditioning [USE_PRECONDITIONING]] [--prior_sig [PRIOR_SIG]] [--epochs [EPOCHS]] [--lr [LR]] [--models_dir [MODELS_DIR]] [--results_dir [RESULTS_DIR]]

For an explanation of the script's arguments:

python train_SGLD_MNIST.py -h

pSGLD

(https://arxiv.org/abs/1512.07666)

SGLD with RMSprop preconditioning. A higher learning rate should be used than for vanilla SGLD.
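A hedged sketch of the preconditioned update: an RMSprop-style running average of squared gradients rescales both the gradient step and the injected noise (variable names are illustrative, and the small correction term from the paper is omitted, as is common in practice).

```python
import torch

def psgld_step(params, sq_avgs, lr, alpha=0.99, eps=1e-5):
    """One pSGLD update with an RMSprop preconditioner (sq_avgs holds one running average per parameter)."""
    with torch.no_grad():
        for p, v in zip(params, sq_avgs):
            if p.grad is None:
                continue
            # Update the running average of squared gradients.
            v.mul_(alpha).addcmul_(p.grad, p.grad, value=1 - alpha)
            precond = 1.0 / (v.sqrt() + eps)
            noise = torch.randn_like(p) * (2.0 * lr * precond).sqrt()
            p.add_(-lr * precond * p.grad + noise)
```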

Train a model on MNIST:

python train_SGLD_MNIST.py --use_preconditioning True [--prior_sig [PRIOR_SIG]] [--epochs [EPOCHS]] [--lr [LR]] [--models_dir [MODELS_DIR]] [--results_dir [RESULTS_DIR]]

Bootstrap MAP Ensemble

Multiple networks are trained on subsamples of the dataset.
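A minimal sketch of the idea, under assumed variable names: each ensemble member is trained on a bootstrap subsample of the training set, and predictions are averaged at test time.

```python
import numpy as np

def bootstrap_indices(n_data, subsample=1.0, n_nets=5, seed=0):
    """Return one index subsample of the training set per ensemble member."""
    rng = np.random.RandomState(seed)
    n_sub = int(subsample * n_data)
    return [rng.choice(n_data, size=n_sub, replace=True) for _ in range(n_nets)]

# At test time, the predictive distribution is the average of the members' softmax outputs, e.g.
# probs = torch.stack([net(x).softmax(dim=1) for net in nets]).mean(dim=0)
```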

Colab notebooks with regression models: MAP Ensemble homoscedastic / heteroscedastic

Train an ensemble on MNIST:

python train_Bootrap_Ensemble_MNIST.py [--weight_decay [WEIGHT_DECAY]] [--subsample [SUBSAMPLE]] [--n_nets [N_NETS]] [--epochs [EPOCHS]] [--lr [LR]] [--models_dir [MODELS_DIR]] [--results_dir [RESULTS_DIR]]

For an explanation of the script's arguments:

python train_Bootrap_Ensemble_MNIST.py -h

Kronecker-Factorised Laplace

(https://openreview.net/pdf?id=Skdvd2xAZ)

Train a MAP network and then calculate a second-order Taylor series approximation to the curvature around a mode of the posterior. A block-diagonal Hessian approximation is used, where only intra-layer dependencies are accounted for. The Hessian is further approximated as the Kronecker product of the expectations of a single datapoint's Hessian factors. Approximating the Hessian can take a while; fortunately it only needs to be done once.
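For a fully-connected layer with input activations a and back-propagated gradients g with respect to the pre-activations, the layer's Hessian factor is approximated as E[a aᵀ] ⊗ E[g gᵀ]. A hedged sketch of accumulating those expectations over a batch (illustrative only, not the repo's exact code):

```python
import torch

def kron_factors(acts, grads):
    """Kronecker factors A = E[a a^T] and G = E[g g^T] for one linear layer.
    acts: (N, n_in) layer inputs; grads: (N, n_out) gradients w.r.t. the pre-activations."""
    N = acts.shape[0]
    A = acts.t() @ acts / N
    G = grads.t() @ grads / N
    return A, G  # the layer's Hessian is approximated by A ⊗ G
```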

Train a MAP network on MNIST and approximate Hessian:

python train_KFLaplace_MNIST.py [--weight_decay [WEIGHT_DECAY]] [--hessian_diag_sig [HESSIAN_DIAG_SIG]] [--epochs [EPOCHS]] [--lr [LR]] [--models_dir [MODELS_DIR]] [--results_dir [RESULTS_DIR]]

For an explanation of the script's arguments:

python train_KFLaplace_MNIST.py -h

Note that we save the unscaled and uninverted Hessian factors. This will allow for computationally cheap changes to the prior at inference time as the Hessian will not need to be re-computed. Inference will require inverting the approximated Hessian factors and sampling from a matrix normal distribution. This is shown in notebooks/KFAC_Laplace_MNIST.ipynb
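Concretely, each layer's weight posterior is approximated by a matrix normal centred at the MAP weights, whose row and column covariances are the inverses of the regularised Kronecker factors. A hedged sketch of drawing one weight sample (assumed variable names; see the notebook above for the repo's actual procedure):

```python
import torch

def sample_weights(W_map, A, G, prior_prec=1.0, scale=1.0):
    """Draw one sample from the matrix normal MN(W_map, (scale*G + pI)^-1, (scale*A + pI)^-1)."""
    n_out, n_in = W_map.shape
    # Regularise the factors with the prior precision, invert, and take Cholesky factors.
    A_inv = torch.inverse(scale * A + prior_prec * torch.eye(n_in))
    G_inv = torch.inverse(scale * G + prior_prec * torch.eye(n_out))
    L_A = torch.linalg.cholesky(A_inv)
    L_G = torch.linalg.cholesky(G_inv)
    E = torch.randn(n_out, n_in)
    return W_map + L_G @ E @ L_A.t()
```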

Stochastic Gradient Hamiltonian Monte Carlo

(https://arxiv.org/abs/1402.4102)

We implement the scale-adapted version of this algorithm, which finds its hyperparameters automatically during the burn-in period. We place a Gaussian prior over network weights and a Gamma hyperprior over the Gaussian's precision.
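For orientation, a hedged sketch of the basic SG-HMC update, without the scale adaptation (which additionally estimates the gradient noise and tunes the friction and mass terms during burn-in):

```python
import torch

def sghmc_step(params, velocities, lr, friction=0.05):
    """One SG-HMC step: momentum update with friction and injected noise, then a position update.
    Assumes .grad holds the gradient of the negative log-posterior (scaled to the full dataset)."""
    with torch.no_grad():
        for p, v in zip(params, velocities):
            if p.grad is None:
                continue
            noise = torch.randn_like(p) * (2.0 * friction * lr) ** 0.5
            v.mul_(1.0 - friction).add_(-lr * p.grad + noise)
            p.add_(v)
```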

Run the SG-HMC-SA burn-in and sampler, saving weights to the specified directory:

python train_SGHMC_MNIST.py [--epochs [EPOCHS]] [--sample_freq [SAMPLE_FREQ]] [--burn_in [BURN_IN]] [--lr [LR]] [--models_dir [MODELS_DIR]] [--results_dir [RESULTS_DIR]]

For an explanation of the script's arguments:

python train_SGHMC_MNIST.py -h

Approximate Inference in Neural Networks

MAP inference provides a point estimate of parameter values. When provided with out-of-distribution inputs, such as rotated digits, these models tend to make wrong predictions with high confidence.

Uncertainty Decomposition

We can measure uncertainty in our models' predictions through the predictive entropy. We can decompose this term in order to distinguish between two types of uncertainty. Uncertainty caused by noise in the data (aleatoric uncertainty) can be quantified as the expected entropy of the model's predictions. Model uncertainty (epistemic uncertainty) can be measured as the difference between the total predictive entropy and the aleatoric entropy.
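A short sketch of this decomposition for a classifier, given softmax outputs from several posterior weight samples (function and variable names are illustrative):

```python
import torch

def decompose_uncertainty(probs, eps=1e-12):
    """probs: (n_samples, n_points, n_classes) softmax outputs from posterior weight samples."""
    mean_probs = probs.mean(dim=0)
    total = -(mean_probs * (mean_probs + eps).log()).sum(dim=1)        # predictive (total) entropy
    aleatoric = -(probs * (probs + eps).log()).sum(dim=2).mean(dim=0)  # expected entropy
    epistemic = total - aleatoric                                      # mutual information
    return total, aleatoric, epistemic
```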

Results

Homoscedastic Regression

Toy homoscedastic regression task. Data is generated by a GP with an RBF kernel (l = 1, σn = 0.3). We use a single-output FC network with one hidden layer of 200 ReLU units to predict the regression mean μ(x). A fixed log σ is learnt separately.

Heteroscedastic Regression

Same scenario as the previous section, but log σ(x) is now predicted from the input.

Toy heteroscedastic regression task. Data is generated by a GP with an RBF kernel (l = 1, σn = 0.3 · |x + 2|). We use a two-head network with 200 ReLU units to predict the regression mean μ(x) and log-standard deviation log σ(x).
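A minimal sketch of such a two-head network and the corresponding Gaussian negative log-likelihood (a hedged illustration under assumed layer sizes, not the repo's exact architecture):

```python
import torch
import torch.nn as nn

class TwoHeadRegressor(nn.Module):
    """Predicts the mean mu(x) and log standard deviation log sigma(x) from a shared hidden layer."""
    def __init__(self, n_in=1, n_hid=200):
        super().__init__()
        self.body = nn.Sequential(nn.Linear(n_in, n_hid), nn.ReLU())
        self.mu_head = nn.Linear(n_hid, 1)
        self.logsig_head = nn.Linear(n_hid, 1)

    def forward(self, x):
        h = self.body(x)
        return self.mu_head(h), self.logsig_head(h)

def gauss_nll(mu, log_sig, y):
    # Negative log-likelihood of y under N(mu, sigma^2), constants dropped.
    return (log_sig + 0.5 * ((y - mu) / log_sig.exp()) ** 2).mean()
```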

Regression on UCI datasets

We performed heteroscedastic regression on the six UCI datasets (housing, concrete, energy efficiency, power plant, red wine and yacht), using 10-fold cross validation. All these experiments are contained in the heteroscedastic notebooks. Note that results depend heavily on hyperparameter selection. The plots below show log-likelihoods and RMSEs on the training (semi-transparent colour) and test (solid colour) sets. Circles and error bars correspond to the 10-fold cross validation means and standard deviations respectively.

MNIST Classification

W is marginalised with 100 samples of the weights for all models except MAP, where only one set of weights is used.

MNIST Test | MAP    | MAP Ensemble | BBP Gaussian | BBP GMM  | BBP Laplace | BBP Local Reparam | MC Dropout | SGLD    | pSGLD
Log Like   | -572.9 | -496.54      | -1100.29     | -1008.28 | -892.85     | -1086.43          | -435.458   | -828.29 | -661.25
Error %    | 1.58   | 1.53         | 2.60         | 2.38     | 2.28        | 2.61              | 1.37       | 1.76    | 1.76

MNIST test results for the methods under consideration. Extensive hyperparameter tuning has not been performed. We approximate the posterior predictive distribution with 100 MC samples. We use a FC network with two 1200 unit ReLU layers. If unspecified, the prior is Gaussian with std=0.1. pSGLD uses RMSprop preconditioning.
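Marginalising over the weights amounts to averaging the softmax outputs over posterior samples, as in the hedged sketch below (`sample_predict` stands for a hypothetical stochastic forward pass; each method implements this differently):

```python
import torch

def predictive_probs(model, x, n_samples=100):
    """Approximate the posterior predictive by averaging softmax outputs over weight samples."""
    with torch.no_grad():
        probs = torch.stack([model.sample_predict(x).softmax(dim=1)  # one forward pass per weight sample
                             for _ in range(n_samples)])
    return probs.mean(dim=0)
```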

The original paper for Bayes By Backprop reports around 1% error on MNIST. We find that this result is attainable only if approximate posterior variances are initialised to be very small (BBP Gauss 2). In this scenario, the distributions over weights resemble deltas, giving good predictive performance but bad uncertainty estimates. However, when initialising the variances to match the prior (BBP Gauss 1), we obtain the above results. The training curves for both of these hyperparameter configuration schemes are shown below:

MNIST Uncertainty

Total, aleatoric and epistemic uncertainties obtained when creating OOD samples by augmenting the MNIST test set with rotations:

Total and epistemic uncertainties obtained by testing our models, which have been trained on MNIST, on the KMNIST dataset:

Adversarial robustness

Total, aleatoric and epistemic uncertainties obtained when feeding our models with adversarial samples (FGSM).

Weight Distributions

Histograms of weights sampled from each model trained on MNIST. We draw 10 samples of w for each model.

Weight Pruning

#TODO
