Modular Probabilistic Programming on MXNet

Overview


Tutorials | Documentation | Contribution Guide

MXFusion is a modular deep probabilistic programming library.

With MXFusion Modules you can use state-of-the-art inference techniques for specialized probabilistic models without needing to implement those techniques yourself. MXFusion helps you rapidly build and test new methods at scale by focusing on the modularity of probabilistic models and their integration with modern deep learning techniques.

MXFusion uses MXNet as its computational platform to bring the power of distributed, heterogeneous computation to probabilistic modeling.
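
For a flavor of the API, here is a minimal sketch that defines a simple Gaussian model, following the style of the tutorials (exact module paths and signatures may vary between versions):

# A minimal sketch, assuming the tutorial-style API; check the docs
# for the version you have installed.
from mxfusion import Model, Variable
from mxfusion.components.variables import PositiveTransformation
from mxfusion.components.distributions import Normal

m = Model()
m.mu = Variable()                                        # unknown mean
m.s = Variable(transformation=PositiveTransformation())  # positive variance
m.Y = Normal.define_variable(mean=m.mu, variance=m.s, shape=(100, 1))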

Installation

Dependencies / Prerequisites

MXFusion's primary dependencies are MXNet >= 1.3 and NetworkX >= 2.1. See requirements.

Supported Architectures / Versions

MXFusion is tested on Python 3.4+ on macOS and Linux.

Installation of MXNet

There are multiple PyPI packages of MXNet. A straightforward installation with CPU support only can be done with:

pip install mxnet

For an installation with GPU or MKL support, detailed instructions can be found on the MXNet site.
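
For example, at the time of writing the MKL-DNN and CUDA builds were published as separate packages (exact package names vary by CUDA version, so check the MXNet site first):

pip install mxnet-mkl
pip install mxnet-cu101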

pip

If you just want to use MXFusion and not modify the source, you can install through pip:

pip install mxfusion

From source

To install MXFusion from source, after cloning the repository run the following from the top-level directory:

pip install .
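
If you plan to modify the source, an editable install keeps your working copy importable without reinstalling after each change:

pip install -e .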

Where to go from here?

Tutorials

Documentation

Contributions

Community

We welcome your contributions and questions and are working to build a responsive community around MXFusion. Feel free to file a GitHub issue if you find a bug or want to request a new feature.

Contributing

Have a look at our contributing guide. Thanks for your interest!

Points of contact for MXFusion are:

  • Eric Meissner (@meissnereric)
  • Zhenwen Dai (@zhenwendai)

License

MXFusion is licensed under Apache 2.0. See LICENSE.

Comments
  • LBFGS optimizer

    Issue #, if available: #75

    Description of changes: This is a draft pull request to add an LBFGS optimizer for the optimization of deterministic loss functions. I found this works much better than the current default optimizer for vanilla GPs.

    If you don't object to the approach I use here, I will take the time to add tests and tidy the code up.

    By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.

    opened by marpulli 5
  • Copy over module algorithms correctly

    The module algorithms dictionary is expected to contain lists of tuples, but after cloning a module this is not the case. This PR fixes that.

    By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.

    opened by marpulli 5
  • Changed transpose property to function.

    This stops the debugger from accidentally creating new variables on access.

    Description of changes: When using an interactive debugger (e.g. PyCharm), inspecting any Variable object invokes the property T, which has the side effect of creating new variables that are not part of the graph. By changing this to a function, .transpose(), the functionality is maintained (at the slight expense of ease of use) while preventing this from happening.
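
    For illustration, a standalone sketch of the side effect (not MXFusion's actual code; class names are hypothetical):

    class Graph:
        def __init__(self):
            self.nodes = []

        def add_transpose(self, var):
            # Record a new transpose node in the graph.
            self.nodes.append(('transpose', var))
            return var

    class Variable:
        def __init__(self, graph):
            self._graph = graph

        @property
        def T(self):
            # A debugger that displays attributes evaluates this property,
            # silently appending a node to the graph.
            return self._graph.add_transpose(self)

        def transpose(self):
            # An explicit method only mutates the graph when actually called.
            return self._graph.add_transpose(self)

    g = Graph()
    v = Variable(g)
    _ = v.T              # e.g. a debugger inspecting attributes
    print(len(g.nodes))  # 1 -- the graph grew as a side effect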

    By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.

    opened by tdiethe 5
  • GP replicate_self fixes

    This fixes #183 (though maybe you want to copy kernels in a different way, as discussed yesterday). It also fixes a few other issues with cloning this specific GP module and adds tests.

    By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.

    opened by marpulli 4
  • DGP implementation

    This PR adds an implementation of deep Gaussian processes (DGPs).

    I have added a marginalise_conditional_distribution method to the conditional GP class which is used both by the deep GP and the SVI GP.

    I chose not to explicitly represent the conditional p(f|u), as I couldn't work out how to represent it while keeping the marginalization as efficient.
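
    For reference, the textbook marginalization such a method implements, assuming a Gaussian variational posterior q(u) = N(m, S) over the inducing outputs (this is the standard sparse-GP result, not necessarily the exact form used in this PR):

    q(f) = \int p(f \mid u)\, q(u)\, du
         = \mathcal{N}\!\left(f;\; K_{fu} K_{uu}^{-1} m,\; K_{ff} - K_{fu} K_{uu}^{-1} (K_{uu} - S) K_{uu}^{-1} K_{uf}\right)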

    By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.

    opened by marpulli 4
  • Move constants in InferenceParameters into ParameterDict

    Description of changes: Shift MXNet-based constants from being stored in the InferenceParameters._constants dictionary (which still exists and keeps scalar/native constants) to the ParameterDict where other Parameters are kept.

    This removes the need for the separate MXNet constants file at serialization time as well!

    (Feel free to ignore the documentation change here, it's a remnant of the larger docs change I made in another PR and I'll try to make sure the other one is the one that gets kept.)

    By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.

    enhancement 
    opened by meissnereric 4
  • Implement common MXNet operators (dot product, diag, etc.) in MXFusion

    These should be implemented as stateless functions.

    Create a class that takes in variables and returns a function evaluation. A sketch of such a wrapper follows this list.

    • Basic: add, subtract, multiplication, division of variables
    • Elementwise: square, exponentiation, log
    • Aggregation: sum, mean, prod
    • Matrix ops: dot product, diag
    • Matrix manipulation: reshape, transpose
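
    A minimal sketch of such a stateless wrapper (illustrative only; the class and method names here are hypothetical, not MXFusion's actual operator API):

    import mxnet as mx

    class MXNetOperator:
        """Wraps a stateless MXNet operator so it can be evaluated on inputs."""
        def __init__(self, func, name):
            self.func = func   # the underlying MXNet function, e.g. mx.nd.dot
            self.name = name

        def eval(self, *arrays):
            # Evaluate the wrapped operator on concrete MXNet arrays.
            return self.func(*arrays)

    # Example: a dot-product operator applied to two arrays.
    dot = MXNetOperator(mx.nd.dot, 'dot')
    a = mx.nd.array([[1.0, 2.0]])
    b = mx.nd.array([[3.0], [4.0]])
    print(dot.eval(a, b))  # [[11.]]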

    enhancement 
    opened by meissnereric 4
  • Folder structure changes and renaming

    It contains the following changes:

    • Folder structure changes and renaming according to our discussion.
    • Extend Mark's multivariate normal distribution implementation to allow general broadcasting.

    @marpulli I changed the behavior of your log_pdf and kl_divergence implementations to return an array of the size of the mean without the last dimension. This is consistent with our other distributions.

    By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.

    opened by zhenwendai 3
  • The monthly cron build of Travis-CI on master fails.

    The monthly cron build of Travis-CI on the master branch fails because it tries to deploy again. Here is the link to a build: https://travis-ci.org/amzn/MXFusion/jobs/453598240.

    bug 
    opened by zhenwendai 3
  • Uniform and Laplace distributions

    Issue #, if available:

    Description of changes:

    Uniform and Laplace distributions + tests

    By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.

    enhancement 
    opened by tdiethe 3
  • Added Bernoulli distribution + tests

    Issue #, if available: #21

    Description of changes: Added Bernoulli distribution + tests

    By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.

    enhancement 
    opened by tdiethe 3
  • Switch to deep numpy as backend.

    Hi,

    I'm a member of the Deep Numpy dev team (Deep Numpy is a new MXNet frontend API with a NumPy-like interface: https://numpy.mxnet.io/index.html), and I'm currently working on its random module, which, as the name suggests, behaves exactly like NumPy, including broadcastable parameters and output shapes. For example: https://github.com/apache/incubator-mxnet/pull/15858
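
    For a sense of what broadcastable parameters mean here, a small example (illustrated with NumPy itself, since the mx.np surface may differ by version):

    import numpy as np

    mean = np.zeros((3, 1))
    scale = np.array([1.0, 2.0])
    # Parameter shapes (3, 1) and (2,) broadcast to an output of shape (3, 2).
    print(np.random.normal(mean, scale).shape)  # (3, 2)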

    Also, the sampling backend for rejection sampling has been entirely rewritten to remove the while loop from the GPU kernel. The new version, according to my profiling, performs ten times faster than nd.random on GPU at the cost of a tiny increase in memory usage. (Not merged yet: https://github.com/apache/incubator-mxnet/issues/15928)

    We could have some further discussion if you are interested in switching to Deep Numpy's random sampling backend, or in migrating MXFusion to the Deep Numpy API :-)

    opened by xidulu 0
  • GP Likelihood classes

    Is your feature request related to a problem? Please describe.
    The SVGP and DGP modules should be able to work with alternative likelihoods. They are both currently implemented for Gaussian likelihoods only. The ability to switch out likelihoods would make things like #126 much simpler with less code duplication.

    Describe the solution you'd like
    Currently:

    m = Model()
    m.X = Variable()
    m.noise_var = Variable(transformation=PositiveTransformation())
    m.Y = SVGP.define_variable(m.X, kern, m.noise_var...)
    

    I'd like something more like

    m = Model()
    m.X = Variable()
    m.noise_var = Variable(transformation=PositiveTransformation())
    m.likelihood = NormalGPLikelihood(variance)
    # This could also be: 
    # m.likelihood = BernoulliGPLikelihood()
    m.Y = SVGP.define_variable(m.X, kern, m.likelihood...)
    

    These likelihoods would need to be able to compute the expectation of the log-likelihood, E_{p(f)}[log p(y|f)], where p(f) is Gaussian.

    class BernoulliGPLikelihood(GPLikelihood):
        # This class would also contain the transformation of the
        # latent function to the interval [0, 1] I think
        def expectation(self, mean, variance, data):
            # This method would implement the quadrature rule to do this here
            ...

    class NormalGPLikelihood(GPLikelihood):
        def expectation(self, mean, variance, data):  # think of a better name!
            # This method could use the analytic equation
            ...

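    For the Bernoulli case, the quadrature could look roughly like this standalone sketch (illustrative only, not MXFusion code):

    import numpy as np

    def gauss_hermite_expectation(log_pdf, mean, variance, degree=20):
        # Approximate E_{f ~ N(mean, variance)}[log_pdf(f)] by Gauss-Hermite
        # quadrature, substituting f = mean + sqrt(2 * variance) * x.
        x, w = np.polynomial.hermite.hermgauss(degree)
        f = mean + np.sqrt(2.0 * variance) * x
        return np.sum(w * log_pdf(f)) / np.sqrt(np.pi)

    # Sanity check: E[f] under N(1, 4) recovers the mean.
    print(gauss_hermite_expectation(lambda f: f, 1.0, 4.0))  # ~1.0
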

    Describe alternatives you've considered
    Creating a new class might not be necessary; we might be able to reuse the current distribution objects directly.

    I would be happy to implement this, if we come up with a design which people are happy with first.

    opened by marpulli 0
  • Inference.run throws when passing Variables of type PARAMETER as 'constants' argument

    Describe the bug

    When passing Variables of type PARAMETER as the 'constants' argument to the constructor of Inference (or its child classes), this line throws during the execution of Inference's run method, since the Variables that were passed as 'constants' are in self._var_trans but not in the 'kw' input argument.

    Expected behavior

    Execution does not throw, and Variables passed as the 'constants' argument to the Inference constructor, even if they are of type PARAMETER, are treated as constants and not optimized during training.

    Desktop:

    • OS: Ubuntu 18.04.2
    • Python version: 3.6
    • MXNet version: 1.4.1
    • MXFusion version: 0.3.1
    • MXNet context: CPU
    bug 
    opened by pabfer 2
  • Execution throws if a Kernel Variable is set to CONSTANT

    Describe the bug

    If a kernel Variable is of CONSTANT type, fetch_parameters throws here, since CONSTANT Variables do not make it into the params input argument.

    Expected behavior

    Execution does not throw, and kernel Variables of type CONSTANT are kept constant and not optimized during training.

    Desktop:

    • OS: Ubuntu 18.04.2
    • Python version: 3.6
    • MXNet version: 1.4.1
    • MXFusion version: 0.3.1
    • MXNet context: CPU
    bug 
    opened by pabfer 1
  • Uninformative throw in MinibatchInferenceLoop if batch_size is larger than dataset size

    Describe the bug

    If batch_size for MinibatchInferenceLoop is larger than the dataset size, this line throws due to a division by zero.

    Expected behavior

    Possible mitigations are:

    • Option No. 1: Default the batch size to min(requested batch size, dataset size) and issue a warning if the requested batch size exceeds the dataset size (a sketch follows this list).
    • Option No. 2: Throw with a message stating why it's throwing and how to fix the issue.
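
    A sketch of Option No. 1; the function name is illustrative, not the actual MinibatchInferenceLoop internals:

    import warnings

    def effective_batch_size(requested, dataset_size):
        # Clamp the batch size to the dataset size, warning when it was too large.
        if requested > dataset_size:
            warnings.warn("batch_size (%d) exceeds dataset size (%d); using %d"
                          % (requested, dataset_size, dataset_size))
            return dataset_size
        return requested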

    Desktop:

    • OS: Ubuntu 18.04.2
    • Python version: 3.6
    • MXNet version: 1.4.1
    • MXFusion version: 0.3.1
    • MXNet context: CPU
    bug Easy 
    opened by pabfer 1
Releases (v0.3.1)
  • v0.3.1(May 30, 2019)

    • Added SVGP regression notebook.
    • Updated the VAE and GP regression notebook.
    • Removed the dependency on scikit-learn.
    • Moved the parameters of Gluon blocks to be controlled by MXFusion.
    • Fixed a bug in mini-batch learning.
    • Extended the SVGPRegression module to take samples as input variables.
    • Documentation and stylistic edits.
    • Merged in the PILCO Changes
  • v0.3.0(Feb 20, 2019)

    The bigger changes are:

    • #133 Changing variable broadcasting for factors
    • #153 Simplify serialization to use a simple zip file

    Other than that, it's mostly bug fixes and documentation changes.

    • #144 Add details to inference and serialization documentation
    • #148 Fix a bug for SVGP regression with minibatch training
    • #81 Reduce num_samples in uniform non-mock test to 1000
    • #137 Changed transpose property to function.
    • #135 Bug fix in function evaluation
  • v0.2.1(Nov 9, 2018)

    • Add the tutorial for Gaussian process regression.
    • Fix empty operator bug.
    • Fix bug to allow the same variable for multiple inputs to a factor.
    • Add module serialization.
    • Fix the bug that causes the failure of documentation compilation.
    • Fix the bug: the inference methods of GP modules do not handle samples.
    • Update issue templates.
    • Add license headers to all files.
    • Add the getting started notebook.
    • Remove the dependency on future.
    • Update the Inference documentation.
    • Implement Dirichlet distribution.
    • Add logistic variable transformation.
    • Implement Expectation inference.
    • Fix the bugs related to dtype.
    • Validate shape of array variables.
    • Fix divide by zero error if max_iter < n_prints.
    • Add multiply kernel.
  • v0.2.0(Oct 19, 2018)

    • Improve the printing messages of the gradient loop and bug fix for GluonFunctionEvaluation
    • Add Gamma Distribution
    • GP Modules enhancement
    • Implement Module, GPRegression, SparseGPRegression and SVGPRegression…
    • Update the README
    • Add score function for variational inference
    • Refactor the interface of inference for modules
    • Add score function inference
    • Implement common MXNet operators (dot product, diag, etc.) in MXFusion
    • Add support for MXNet operators.
    • Clean up kernel function and function wrappers for copying
    • Implement the base class MXFusionFunction