Modular Probabilistic Programming on MXNet

Overview

MXFusion

Build Status | codecov | pypi | Documentation Status | GitHub license

Tutorials | Documentation | Contribution Guide

MXFusion is a modular deep probabilistic programming library.

With MXFusion Modules you can use state-of-the-art inference techniques for specialized probabilistic models without needing to implement those techniques yourself. MXFusion helps you rapidly build and test new methods at scale, by focusing on the modularity of probabilistic models and their integration with modern deep learning techniques.

MXFusion uses MXNet as its computational platform to bring the power of distributed, heterogeneous computation to probabilistic modeling.
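
As a quick taste of the modelling style, here is a minimal sketch of defining a simple model. It follows the Model / Variable / define_variable pattern that also appears in the issues further down this page; the exact import paths and argument names are assumptions, so check the tutorials for the authoritative API.

# Minimal sketch of a probabilistic model in MXFusion (import paths and
# argument names are assumptions; see the tutorials for the real API).
from mxfusion import Model, Variable
from mxfusion.components.variables import PositiveTransformation
from mxfusion.components.distributions import Normal

m = Model()
m.mu = Variable()                                          # unknown mean
m.var = Variable(transformation=PositiveTransformation())  # positive variance
m.Y = Normal.define_variable(mean=m.mu, variance=m.var, shape=(100, 1))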

Installation

Dependencies / Prerequisites

MXFusion's primary dependencies are MXNet >= 1.3 and NetworkX >= 2.1. See requirements.

Supported Architectures / Versions

MXFusion is tested on Python 3.4+ on macOS and Linux.

Installation of MXNet

There are multiple PyPI packages of MXNet. A straightforward installation with CPU-only support can be done with:

pip install mxnet

For an installation with GPU or MKL support, detailed instructions can be found on the MXNet site.

pip

If you just want to use MXFusion and not modify the source, you can install through pip:

pip install mxfusion

From source

To install MXFusion from source, after cloning the repository run the following from the top-level directory:

pip install .

Where to go from here?

Tutorials

Documentation

Contributions

Community

We welcome your contributions and questions and are working to build a responsive community around MXFusion. Feel free to file a GitHub issue if you find a bug or want to request a new feature.

Contributing

Have a look at our contributing guide. Thanks for your interest!

Points of contact for MXFusion are:

  • Eric Meissner (@meissnereric)
  • Zhenwen Dai (@zhenwendai)

License

MXFusion is licensed under Apache 2.0. See LICENSE.

Comments
  • LBFGS optimizer

    Issue #, if available: #75

    Description of changes: This is a draft pull request to add LBFGS optimizer for optimization of deterministic loss functions. I found this works much better than the current default optimizer for vanilla GPs.

    If you don't object to the approach I use here, I will take the time to add tests and tidy up the code.

    By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.

    opened by marpulli 5
  • Copy over module algorithms correctly

    The module algorithms dictionary is expected to be a list containing tuples, but after cloning a module this is not the case. This PR fixes that.

    By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.

    opened by marpulli 5
  • Changed transpose property to function.

    This stops the debugger accidentally creating new variables on access.

    Description of changes: When using an interactive debugger (e.g. PyCharm), inspecting any Variable object invokes the property T which has the side effect of creating new variables that are not part of the graph. By changing this to a function .transpose() the functionality is maintained (at the slight expense of ease of use) whilst preventing this from happening.

    By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.

    opened by tdiethe 5
  • GP replicate_self fixes

    This fixes #183 (but maybe you want to copy kernels in a different way, as discussed yesterday). It also fixes a few other issues with cloning this specific GP module and adds tests.

    By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.

    opened by marpulli 4
  • DGP implementation

    This PR adds an implementation of DGPs.

    I have added a marginalise_conditional_distribution method to the conditional GP class which is used both by the deep GP and the SVI GP.

    I chose not to explicitly represent the conditional p(f|u) as I couldn't work out how to represent it while keeping the marginalization as efficient.

    By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.

    opened by marpulli 4
  • Move constants in InferenceParameters into ParameterDict

    Description of changes: Shift MXNet-based constants from the InferenceParameters._constants dictionary (which still exists and keeps scalar/native constants) into the ParameterDict where other Parameters are kept.

    This also removes the need for a separate MXNet constants file at serialization time!

    (Feel free to ignore the documentation change here, it's a remnant of the larger docs change I made in another PR and I'll try to make sure the other one is the one that gets kept.)

    By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.

    enhancement 
    opened by meissnereric 4
  • Implement common MXNet operators (dot product, diag, etc.) in MXFusion

    These should be implemented as stateless functions.

    Create a class that takes in variables and returns a function evaluation; a rough sketch of this idea is given after the list below.

    Basic - addition, subtraction, multiplication, division of variables

    Elementwise - square, exponentiation, log

    Aggregation - sum, mean, prod

    Matrix ops - dot product, diag

    Matrix manipulation - reshape, transpose
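
    Below is a rough, hypothetical sketch of the stateless-operator idea described above. The class and function names here are illustrative only and are not MXFusion's actual operator API.

    import mxnet as mx

    class Operator:
        """Wraps an MXNet op so it can be applied lazily to named inputs."""
        def __init__(self, func, *inputs):
            self.func = func        # the underlying MXNet operator, e.g. mx.nd.dot
            self.inputs = inputs    # the names (or variables) the operator consumes

        def eval(self, **bindings):
            # Look up a concrete mx.nd array for each input and apply the op.
            args = [bindings[name] for name in self.inputs]
            return self.func(*args)

    def dot(a, b):
        return Operator(mx.nd.dot, a, b)

    # Usage: build a lazy dot product over two named inputs, then evaluate it.
    expr = dot("x", "w")
    out = expr.eval(x=mx.nd.ones((2, 3)), w=mx.nd.ones((3, 4)))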

    enhancement 
    opened by meissnereric 4
  • Folder structure changes and renaming

    It contains the following changes:

    • Folder structure changes and renaming according to our discussion.
    • Extend Mark's multivariate normal distribution implementation to allow general broadcasting.

    @marpulli I changed the behavior of your log_pdf and kl_divergence implementation to return an array of the size of mean without the last dimension. This is consistent with our other distributions.

    By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.

    opened by zhenwendai 3
  • The monthly cron build of Travis-CI on master fails.

    The monthly cron build of Travis-CI on the master branch fails because it tries to deploy again. Here is the link to a build: https://travis-ci.org/amzn/MXFusion/jobs/453598240.

    bug 
    opened by zhenwendai 3
  • Uniform and Laplace distributions

    Issue #, if available:

    Description of changes:

    Uniform and Laplace distributions + tests

    By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.

    enhancement 
    opened by tdiethe 3
  • Added Bernoulli distribution + tests

    Issue #, if available: #21

    Description of changes: Added Bernoulli distribution + tests

    By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.

    enhancement 
    opened by tdiethe 3
  • Switch to deep numpy as backend.

    Hi,

    I'm a member of the Deep Numpy dev team (Deep Numpy is a new MXNet frontend API with a NumPy-like interface: https://numpy.mxnet.io/index.html), and I'm currently working on the random module for Deep Numpy, which, as its name suggests, behaves exactly like NumPy, including broadcastable parameters and output shapes. For example: https://github.com/apache/incubator-mxnet/pull/15858

    Also, the sampling backend for rejection sampling has been entirely rewritten to remove the while loop from the GPU kernel. The new version, according to my profiling, performs ten times faster than nd.random on GPU at the cost of a tiny increase in memory usage (not merged yet): https://github.com/apache/incubator-mxnet/issues/15928

    We could have some further discussion if you are interested in switching to Deep Numpy's random sampling backend, or in migrating MXFusion to the Deep Numpy API :-)

    opened by xidulu 0
  • GP Likelihood classes

    Is your feature request related to a problem? Please describe.

    The SVGP and DGP modules should be able to work with alternative likelihoods. They are both currently implemented for Gaussian likelihoods only. The ability to switch out likelihoods would make things like #126 much simpler with less code duplication.

    Describe the solution you'd like

    Currently:

    m = Model()
    m.X = Variable()
    m.noise_var = Variable(transformation=PositiveTransformation())
    m.Y = SVGP.define_variable(m.X, kern, m.noise_var...)
    

    I'd like something more like

    m = Model()
    m.X = Variable()
    m.noise_var = Variable(transformation=PositiveTransformation())
    m.likelihood = NormalGPLikelihood(variance)
    # This could also be: 
    # m.likelihood = BernoulliGPLikelihood()
    m.Y = SVGP.define_variable(m.X, kern, m.likelihood...)
    

    These likelihoods would need to be able to compute the expectation E_{p(f)}[log p(y|f)], where p(f) is Gaussian.

    class BernoulliGPLikelihood(GPLikelihood):
        # This class would also contain the transformation of the
        # latent function to the interval [0, 1], I think.
        def expectation(self, mean, variance, data):
            # This method would implement the quadrature rule.
            pass

    class NormalGPLikelihood(GPLikelihood):
        def expectation(self, mean, variance, data):  # think of a better name!
            # This method could use the analytical equation.
            pass

    Describe alternatives you've considered

    Creating a new class might not be necessary; we might be able to reuse the current distribution objects directly.

    I would be happy to implement this, if we come up with a design which people are happy with first.

    opened by marpulli 0
  • Inference.run throws when passing Variables of type PARAMETER as 'constants' argument

    Describe the bug

    When passing Variables of type PARAMETER as the 'constants' argument to the constructor of Inference (or child classes), this line throws during the execution of Inference's run method, since the Variables that were passed as 'constants' are in self._var_trans but not in the 'kw' input argument.
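
    A hypothetical reproduction sketch (the setup below follows the model and inference patterns shown elsewhere on this page; the exact signatures are assumptions rather than a verified reproduction):

    import mxnet as mx
    from mxfusion import Model, Variable
    from mxfusion.components.variables import PositiveTransformation
    from mxfusion.components.distributions import Normal
    from mxfusion.inference import GradBasedInference, MAP

    m = Model()
    m.mean = Variable()     # a Variable of type PARAMETER
    m.var = Variable(transformation=PositiveTransformation())
    m.Y = Normal.define_variable(mean=m.mean, variance=m.var, shape=(10,))

    # Pass a PARAMETER-type Variable as a constant to the inference constructor.
    infr = GradBasedInference(
        inference_algorithm=MAP(model=m, observed=[m.Y]),
        constants={m.mean: mx.nd.array([0.0])})

    # Throws: m.mean ends up in self._var_trans but is never part of 'kw'.
    infr.run(Y=mx.nd.random.normal(shape=(10,)))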

    Expected behavior

    Execution does not throw, and Variables passed as the 'constants' argument to the Inference constructor, even if they are of type PARAMETER, are treated as constants and not optimized during training.

    Desktop:

    • OS: Ubuntu 18.04.2
    • Python version: 3.6
    • MXNet version: 1.4.1
    • MXFusion version: 0.3.1
    • MXNet context: CPU
    bug 
    opened by pabfer 2
  • Execution throws if a Kernel Variable is set to CONSTANT

    Describe the bug

    If a kernel Variable is of CONSTANT type, fetch_parameters throws here, since CONSTANT Variables do not make it into the params input argument.

    Expected behavior

    Execution does not throw, and kernel Variables that are of type CONSTANT are kept constant and not optimized in training.

    Desktop:

    • OS: Ubuntu 18.04.2
    • Python version: 3.6
    • MXNet version: 1.4.1
    • MXFusion version: 0.3.1
    • MXNet context: CPU
    bug 
    opened by pabfer 1
  • Unexplicit throw in MinibatchInferenceLoop if batch_size is larger than dataset size

    Describe the bug

    If batch_size for MinibatchInferenceLoop is larger than the dataset size, this line throws due to division by zero.

    Expected behavior

    Possible mitigations are:

    • Option No. 1: Default batch size to min("requested batch size", "dataset size") and issue a warning if "requested batch size" > "dataset size" (a small sketch of this option follows the list below).
    • Option No. 2: Throw with a message stating why it's throwing and how to fix the issue.
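
    A tiny illustrative sketch of Option No. 1 (this is not the actual MinibatchInferenceLoop code, just the intended behaviour):

    import warnings

    def clamp_batch_size(requested_batch_size, dataset_size):
        """Fall back to the dataset size when the requested batch is too large."""
        if requested_batch_size > dataset_size:
            warnings.warn("batch_size (%d) is larger than the dataset size (%d); "
                          "using the dataset size instead."
                          % (requested_batch_size, dataset_size))
            return dataset_size
        return requested_batch_size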

    Desktop:

    • OS: Ubuntu 18.04.2
    • Python version: 3.6
    • MXNet version: 1.4.1
    • MXFusion version: 0.3.1
    • MXNet context: CPU
    bug Easy 
    opened by pabfer 1
Releases (v0.3.1)
  • v0.3.1(May 30, 2019)

    • Added SVGP regression notebook.
    • Updated the VAE and GP regression notebook.
    • Removed the dependency on scikit-learn.
    • Moved the parameters of Gluon block to be controlled by MXFusion.
    • Fixed a bug in mini-batch learning.
    • Extended the SVGPRegression module to take samples as input variables.
    • Documentation and stylistic edits.
    • Merged in the PILCO changes.
  • v0.3.0(Feb 20, 2019)

    The bigger changes are:

    • #133 Changing variable broadcasting for factors
    • #153 Simplify serialization to use a simple zip file

    Other than that, it's mostly bug fixes and documentation changes.

    • #144 Add details to inference and serialization documentation
    • #148 Fix a bug for SVGP regression with minibatch training
    • #81 Reduce num_samples in uniform non-mock test to 1000
    • #137 Changed transpose property to function.
    • #135 Bug fix in function evaluation
  • v0.2.1(Nov 9, 2018)

    • Add the tutorial for Gaussian process regression.
    • Fix empty operator bug.
    • Fix bug to allow the same variable for multiple inputs to a factor.
    • Add module serialization.
    • Fix the bug that causes the failure of documentation compilation.
    • Fix the bug: the inference methods of GP modules do not handle samples.
    • Update issue templates.
    • Add license headers to all files.
    • Add the getting started notebook.
    • Remove the dependency on future.
    • Update the Inference documentation.
    • Implement Dirichlet distribution.
    • Add logistic variable transformation.
    • Implement Expectation inference.
    • Fix the bugs related to dtype.
    • Validate shape of array variables.
    • Fix divide by zero error if max_iter < n_prints.
    • Add multiply kernel.
  • v0.2.0(Oct 19, 2018)

    • Improve the printing messages of the gradient loop and fix a bug in GluonFunctionEvaluation
    • Add Gamma distribution
    • GP Modules enhancement
    • Implement Module, GPRegression, SparseGPRegression and SVGPRegression…
    • Update the README
    • Add score function for variational inference
    • Refactor the interface of inference for modules
    • Add score function inference
    • Implement common MXNet operators (dot product, diag, etc.) in MXFusion
    • Add support for MXNet operators.
    • Clean up kernel function and function wrappers for copying
    • Implement the base class MXFusionFunction
Owner
Amazon