PyTorch implementations of Bayes By Backprop, MC Dropout, SGLD, the Local Reparametrisation Trick, KF-Laplace, SG-HMC and more

Overview

Bayesian Neural Networks

License: MIT Python 2.7+ Pytorch 1.0

PyTorch implementations for the following approximate inference methods:

  • Bayes by Backprop
  • Bayes by Backprop + Local Reparametrisation Trick
  • MC Dropout
  • Stochastic Gradient Langevin Dynamics (SGLD)
  • Preconditioned SGLD (pSGLD)
  • Kronecker-Factorised Laplace Approximation
  • Stochastic Gradient Hamiltonian Monte Carlo (SG-HMC)

We also provide code for:

  • Bootstrap MAP Ensembles

Prerequisites

  • PyTorch
  • Numpy
  • Matplotlib

The project is written in Python 2.7 and PyTorch 1.0.1. If CUDA is available, it will be used automatically. The models can also run on CPU as they are not excessively big.

Usage

Structure

Regression experiments

We carried out homoscedastic and heteroscedastic regression experiments on toy datasets, generated with a Gaussian Process ground truth, as well as on real data (six UCI datasets).

Notebooks/regression/(ModelName)_(ExperimentType).ipynb: Contains experiments using (ModelName) on (ExperimentType), i.e. homoscedastic/heteroscedastic. The heteroscedastic notebooks contain both toy and UCI dataset experiments for a given (ModelName).

We also provide Google Colab notebooks, so you can run the experiments on a GPU for free. No modifications are required (all dependencies and datasets are set up from within the notebooks) other than selecting Runtime -> Change runtime type -> Hardware accelerator -> GPU.

MNIST classification experiments

train_(ModelName)_(Dataset).py: Trains (ModelName) on (Dataset). Training metrics and model weights will be saved to the specified directories.

src/: General utilities and model definitions.

Notebooks/classification: An assortment of notebooks for model training, evaluation and digit-rotation uncertainty experiments. They also support weight distribution plotting, weight pruning and loading pre-trained models for experimentation.

Bayes by Backprop (BBP)

(https://arxiv.org/abs/1505.05424)

Colab notebooks with regression models: BBP homoscedastic / heteroscedastic

Train a model on MNIST:

python train_BayesByBackprop_MNIST.py [--model [MODEL]] [--prior_sig [PRIOR_SIG]] [--epochs [EPOCHS]] [--lr [LR]] [--n_samples [N_SAMPLES]] [--models_dir [MODELS_DIR]] [--results_dir [RESULTS_DIR]]

For an explanation of the script's arguments:

python train_BayesByBackprop_MNIST.py -h

Best results are obtained with a Laplace prior.
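As a rough illustration of the core idea, here is a minimal mean-field Gaussian linear layer using the reparameterisation trick; the class name, initialisation choices and prior are illustrative assumptions, not the repository's exact implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BayesLinear(nn.Module):
    """Minimal mean-field Gaussian linear layer (illustrative sketch)."""
    def __init__(self, n_in, n_out, prior_sig=0.1):
        super().__init__()
        self.prior_sig = prior_sig
        self.w_mu = nn.Parameter(torch.empty(n_out, n_in).normal_(0, 0.1))
        self.w_rho = nn.Parameter(torch.full((n_out, n_in), -3.0))  # sigma = softplus(rho)
        self.b_mu = nn.Parameter(torch.zeros(n_out))
        self.b_rho = nn.Parameter(torch.full((n_out,), -3.0))

    def forward(self, x):
        # Reparameterisation: w = mu + sigma * eps, one weight sample per forward pass
        w_sig, b_sig = F.softplus(self.w_rho), F.softplus(self.b_rho)
        w = self.w_mu + w_sig * torch.randn_like(w_sig)
        b = self.b_mu + b_sig * torch.randn_like(b_sig)
        # Closed-form KL between the Gaussian posterior and a zero-mean Gaussian prior
        kl = self._kl(self.w_mu, w_sig) + self._kl(self.b_mu, b_sig)
        return F.linear(x, w, b), kl

    def _kl(self, mu, sig):
        return (torch.log(self.prior_sig / sig)
                + (sig ** 2 + mu ** 2) / (2 * self.prior_sig ** 2) - 0.5).sum()
```

The per-minibatch objective is then the data negative log-likelihood plus the summed KL term scaled by 1/(number of minibatches), averaged over a few weight samples.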

Local Reparametrisation Trick

(https://arxiv.org/abs/1506.02557)

Bayes By Backprop inference where the mean and variance of activations are calculated in closed form and activations are sampled instead of weights. This makes the variance of the Monte Carlo ELBO estimator scale as 1/M, where M is the minibatch size, whereas sampling weights yields a variance scaling as (M-1)/M. The KL divergence between Gaussians is also computed in closed form, further reducing variance. Each epoch is faster to compute and convergence is quicker.
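A minimal sketch of the idea (the helper and its argument names are assumptions for illustration): the Gaussian over weights induces a Gaussian over pre-activations, whose mean and variance are computed in closed form, so noise is sampled per data point rather than per weight matrix.

```python
import torch
import torch.nn.functional as F

def local_reparam_linear(x, w_mu, w_rho, b_mu, b_rho):
    """Sample pre-activations instead of weights (illustrative sketch)."""
    w_sig, b_sig = F.softplus(w_rho), F.softplus(b_rho)
    act_mu = F.linear(x, w_mu, b_mu)                     # mean of the pre-activation
    act_var = F.linear(x ** 2, w_sig ** 2, b_sig ** 2)   # variance of the pre-activation
    eps = torch.randn_like(act_mu)                       # one noise sample per data point
    return act_mu + act_var.clamp_min(1e-16).sqrt() * eps
```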

Train a model on MNIST:

python train_BayesByBackprop_MNIST.py --model Local_Reparam [--prior_sig [PRIOR_SIG]] [--epochs [EPOCHS]] [--lr [LR]] [--n_samples [N_SAMPLES]] [--models_dir [MODELS_DIR]] [--results_dir [RESULTS_DIR]]

MC Dropout

(https://arxiv.org/abs/1506.02142)

A fixed dropout rate of 0.5 is set.
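At test time, dropout is kept active and predictions are averaged over several stochastic forward passes; a minimal sketch (function name and sample count are illustrative):

```python
import torch

def mc_dropout_predict(model, x, n_samples=100):
    """Approximate the posterior predictive by averaging softmax outputs
    over stochastic forward passes with dropout left on (illustrative sketch)."""
    model.train()  # keeps nn.Dropout stochastic; assumes the model has no BatchNorm layers
    with torch.no_grad():
        probs = torch.stack([torch.softmax(model(x), dim=1) for _ in range(n_samples)])
    return probs.mean(0)
```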

Colab notebooks with regression models: MC Dropout homoscedastic / heteroscedastic

Train a model on MNIST:

python train_MCDropout_MNIST.py [--weight_decay [WEIGHT_DECAY]] [--epochs [EPOCHS]] [--lr [LR]] [--models_dir [MODELS_DIR]] [--results_dir [RESULTS_DIR]]

For an explanation of the script's arguments:

python train_MCDropout_MNIST.py -h

Stochastic Gradient Langevin Dynamics (SGLD)

(https://www.ics.uci.edu/~welling/publications/papers/stoclangevin_v6.pdf)

In order to converge to the true posterior over w, the learning rate should be annealed according to the Robbins-Monro conditions. In practice, we use a fixed learning rate.
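The update itself is a gradient step on a minibatch estimate of the negative log posterior plus Gaussian noise whose variance equals the step size. A minimal sketch, assuming the gradients of (N/M) * minibatch NLL + negative log prior are already in p.grad (the helper is illustrative, not the repository's optimiser):

```python
import torch

def sgld_step(params, lr):
    """One SGLD update: theta <- theta - (lr/2) * grad + N(0, lr)  (illustrative sketch)."""
    with torch.no_grad():
        for p in params:
            if p.grad is None:
                continue
            noise = torch.randn_like(p) * lr ** 0.5
            p.add_(-0.5 * lr * p.grad + noise)
```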

Colab notebooks with regression models: SGLD homoscedastic / heteroscedastic

Train a model on MNIST:

python train_SGLD_MNIST.py [--use_preconditioning [USE_PRECONDITIONING]] [--prior_sig [PRIOR_SIG]] [--epochs [EPOCHS]] [--lr [LR]] [--models_dir [MODELS_DIR]] [--results_dir [RESULTS_DIR]]

For an explanation of the script's arguments:

python train_SGLD_MNIST.py -h

pSGLD

(https://arxiv.org/abs/1512.07666)

SGLD with RMSprop preconditioning. A higher learning rate should be used than for vanilla SGLD.
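A minimal sketch of the preconditioned update, keeping a per-parameter running average of squared gradients as in RMSprop (the Gamma correction term from the paper is dropped, as is common in practice; the helper and its defaults are illustrative assumptions):

```python
import torch

def psgld_step(params, state, lr, alpha=0.99, eps=1e-5):
    """One pSGLD update with a diagonal RMSprop preconditioner (illustrative sketch)."""
    with torch.no_grad():
        for p in params:
            if p.grad is None:
                continue
            v = state.setdefault(p, torch.zeros_like(p))
            v.mul_(alpha).addcmul_(p.grad, p.grad, value=1 - alpha)  # running avg of squared grads
            precond = 1.0 / (v.sqrt() + eps)                         # diagonal preconditioner
            noise = torch.randn_like(p) * (lr * precond).sqrt()
            p.add_(-0.5 * lr * precond * p.grad + noise)
```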

Train a model on MNIST:

python train_SGLD_MNIST.py --use_preconditioning True [--prior_sig [PRIOR_SIG]] [--epochs [EPOCHS]] [--lr [LR]] [--models_dir [MODELS_DIR]] [--results_dir [RESULTS_DIR]]

Bootstrap MAP Ensemble

Multiple networks are trained on subsamples of the dataset.
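Concretely, each ensemble member sees its own bootstrap resample of the training set, and the predictive distribution is the average of the members' softmax outputs. A minimal sketch of the resampling step (argument names loosely mirror the script's flags but are assumptions):

```python
import torch
from torch.utils.data import DataLoader, Subset

def make_bootstrap_loaders(dataset, n_nets=10, subsample=1.0, batch_size=128):
    """One bootstrap resample (sampling with replacement) per ensemble member (illustrative sketch)."""
    n = int(len(dataset) * subsample)
    loaders = []
    for _ in range(n_nets):
        idx = torch.randint(len(dataset), (n,)).tolist()
        loaders.append(DataLoader(Subset(dataset, idx), batch_size=batch_size, shuffle=True))
    return loaders
```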

Colab notebooks with regression models: MAP Ensemble homoscedastic / heteroscedastic

Train an ensemble on MNIST:

python train_Bootrap_Ensemble_MNIST.py [--weight_decay [WEIGHT_DECAY]] [--subsample [SUBSAMPLE]] [--n_nets [N_NETS]] [--epochs [EPOCHS]] [--lr [LR]] [--models_dir [MODELS_DIR]] [--results_dir [RESULTS_DIR]]

For an explanation of the script's arguments:

python train_Bootrap_Ensemble_MNIST.py -h

Kronecker-Factorised Laplace

(https://openreview.net/pdf?id=Skdvd2xAZ)

Train a MAP network and then calculate a second-order Taylor series approximation to the curvature around a mode of the posterior. A block-diagonal Hessian approximation is used, where only intra-layer dependencies are accounted for. The Hessian is further approximated as the Kronecker product of the expectations of a single datapoint's Hessian factors. Approximating the Hessian can take a while; fortunately it only needs to be done once.

Train a MAP network on MNIST and approximate Hessian:

python train_KFLaplace_MNIST.py [--weight_decay [WEIGHT_DECAY]] [--hessian_diag_sig [HESSIAN_DIAG_SIG]] [--epochs [EPOCHS]] [--lr [LR]] [--models_dir [MODELS_DIR]] [--results_dir [RESULTS_DIR]]

For an explanation of the script's arguments:

python train_KFLaplace_MNIST.py -h

Note that we save the unscaled and uninverted Hessian factors. This will allow for computationally cheap changes to the prior at inference time as the Hessian will not need to be re-computed. Inference will require inverting the approximated Hessian factors and sampling from a matrix normal distribution. This is shown in notebooks/KFAC_Laplace_MNIST.ipynb
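For reference, sampling a layer's weights from a matrix normal with Kronecker-factorised covariance only needs the Cholesky factors of the two (already inverted and prior-scaled) factors; the helper below is an illustrative sketch, not the notebook's exact code:

```python
import torch

def sample_kf_laplace_weights(w_map, u, v, n_samples=1):
    """Draw W ~ MN(w_map, U, V), i.e. vec(W) ~ N(vec(w_map), V kron U),
    where U (n_out x n_out) and V (n_in x n_in) are the layer's inverted,
    scaled Kronecker Hessian factors (illustrative sketch)."""
    lu = torch.linalg.cholesky(u)
    lv = torch.linalg.cholesky(v)
    eps = torch.randn(n_samples, *w_map.shape)
    return w_map + lu @ eps @ lv.transpose(-1, -2)
```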

Stochastic Gradient Hamiltonian Monte Carlo

(https://arxiv.org/abs/1402.4102)

We implement the scale-adapted version of this algorithm, proposed here to find hyperparameters automatically during burn-in. We place a Gaussian prior over network weights and a Gamma hyperprior over the Gaussian's precision.
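The underlying SGHMC update (Chen et al., 2014) adds a momentum and a friction term to the Langevin step; below is a minimal sketch with the gradient-noise estimate set to zero and constants chosen only for illustration (the scale-adapted version used here tunes them automatically during burn-in):

```python
import torch

def sghmc_step(params, momenta, lr, friction=0.05):
    """One SGHMC update: v <- (1 - friction) * v - lr * grad + N(0, 2 * friction * lr),
    theta <- theta + v. Assumes gradients of the minibatch negative log posterior
    are already in p.grad (illustrative sketch)."""
    with torch.no_grad():
        for p, v in zip(params, momenta):
            if p.grad is None:
                continue
            noise = torch.randn_like(p) * (2 * friction * lr) ** 0.5
            v.mul_(1 - friction).add_(-lr * p.grad + noise)
            p.add_(v)
```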

Run the SG-HMC-SA burn-in and sampler, saving weights to the specified file:

python train_SGHMC_MNIST.py [--epochs [EPOCHS]] [--sample_freq [SAMPLE_FREQ]] [--burn_in [BURN_IN]] [--lr [LR]] [--models_dir [MODELS_DIR]] [--results_dir [RESULTS_DIR]]

For an explanation of the script's arguments:

python train_SGHMC_MNIST.py -h

Approximate Inference in Neural Networks

MAP inference provides a point estimate of parameter values. When provided with out-of-distribution inputs, such as rotated digits, these models tend to make wrong predictions with high confidence.

Uncertainty Decomposition

We can measure uncertainty in our models' predictions through predictive entropy. This term can be decomposed in order to distinguish between two types of uncertainty. Uncertainty caused by noise in the data, or aleatoric uncertainty, can be quantified as the expected entropy of model predictions. Model uncertainty, or epistemic uncertainty, can be measured as the difference between total entropy and aleatoric entropy.
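Given softmax outputs from several weight samples, the decomposition is a few lines (the function is an illustrative sketch):

```python
import torch

def decompose_uncertainty(probs):
    """probs: (n_mc_samples, batch, n_classes) softmax outputs from different weight samples.
    Returns total, aleatoric and epistemic uncertainty per input (illustrative sketch)."""
    eps = 1e-12
    mean_probs = probs.mean(0)
    total = -(mean_probs * (mean_probs + eps).log()).sum(-1)    # predictive entropy
    aleatoric = -(probs * (probs + eps).log()).sum(-1).mean(0)  # expected entropy
    epistemic = total - aleatoric                               # mutual information
    return total, aleatoric, epistemic
```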

Results

Homoscedastic Regression

Toy homoscedastic regression task. Data is generated by a GP with an RBF kernel (l = 1, σn = 0.3). We use a single-output FC network with one hidden layer of 200 ReLU units to predict the regression mean μ(x). A fixed log σ is learnt separately.

Heteroscedastic Regression

Same scenario as previous section but log σ(x) is predicted from the input.

Toy heteroscedastic regression task. Data is generated by a GP with an RBF kernel (l = 1, σn = 0.3 · |x + 2|). We use a two-head network with 200 ReLU units to predict the regression mean μ(x) and log-standard deviation log σ(x).
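Such a two-head network can be trained with the Gaussian negative log-likelihood evaluated at the predicted, input-dependent noise level; a minimal sketch with constant terms dropped (the function name is an assumption):

```python
import torch

def heteroscedastic_nll(mu, log_sigma, y):
    """Gaussian NLL with input-dependent noise, up to an additive constant (illustrative sketch)."""
    return (log_sigma + 0.5 * ((y - mu) / log_sigma.exp()) ** 2).mean()
```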

Regression on UCI datasets

We performed heteroscedastic regression on the six UCI datasets (housing, concrete, energy efficiency, power plant, red wine and yacht), using 10-fold cross validation. All these experiments are contained in the heteroscedastic notebooks. Note that results depend heavily on hyperparameter selection. The plots below show log-likelihoods and RMSEs on the train (semi-transparent colour) and test (solid colour) sets. Circles and error bars correspond to the 10-fold cross-validation means and standard deviations respectively.

MNIST Classification

W is marginalised with 100 samples of the weights for all models except MAP, where only one set of weights is used.

MNIST Test   MAP      MAP Ensemble   BBP Gaussian   BBP GMM    BBP Laplace   BBP Local Reparam   MC Dropout   SGLD      pSGLD
Log Like     -572.9   -496.54        -1100.29       -1008.28   -892.85       -1086.43            -435.458     -828.29   -661.25
Error %      1.58     1.53           2.60           2.38       2.28          2.61                1.37         1.76      1.76

MNIST test results for the methods under consideration. Extensive hyperparameter tuning has not been performed. We approximate the posterior predictive distribution with 100 MC samples. We use a FC network with two 1200-unit ReLU layers. If unspecified, the prior is Gaussian with std=0.1. pSGLD uses RMSprop preconditioning.

The original paper for Bayes By Backprop reports around 1% error on MNIST. We find that this result is attainable only if approximate posterior variances are initialised to be very small (BBP Gauss 2). In this scenario, the distributions over weights resemble deltas, giving good predictive performance but bad uncertainty estimates. However, when initialising the variances to match the prior (BBP Gauss 1), we obtain the above results. The training curves for both of these hyperparameter configuration schemes are shown below:

MNIST Uncertainty

Total, aleatoric and epistemic uncertainties obtained when creating OOD samples by augmenting the MNIST test set with rotations:

Total and epistemic uncertainties obtained by testing our models, which have been trained on MNIST, on the KMNIST dataset:

Adversarial robustness

Total, aleatoric and epistemic uncertainties obtained when feeding our models adversarial samples (FGSM).

Weight Distributions

Histograms of weights sampled from each model trained on MNIST. We draw 10 samples of w for each model.

Weight Pruning

#TODO
