FlowTorch is a PyTorch library for learning and sampling from complex probability distributions using a class of methods called Normalizing Flows.

Copyright © Meta Platforms, Inc.

This source code is licensed under the MIT license found in the LICENSE.txt file in the root directory of this source tree.

Overview

FlowTorch is a PyTorch library for learning and sampling from complex probability distributions using a class of methods called Normalizing Flows.

Installing

An easy way to get started is to install from source:

git clone https://github.com/facebookincubator/flowtorch.git
cd flowtorch
pip install -e .

Further Information

We refer you to the FlowTorch website for more information about installation, using the library, and becoming a contributor.

Comments
  • Ported Jacobian and inverse tests for Bijector from Pyro

    This PR ports the two most important (and complex) tests for bijectors from Pyro: comparing the numerical Jacobian to the analytical one, and confirming that the Bijector.inverse method is correct for invertible bijectors.
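
    To give a flavour of what these tests check, here is a minimal sketch (illustrative only, not the ported code; the forward / inverse / log_abs_det_jacobian names are assumed from the Bijector interface):

    import torch

    def numerical_log_abs_det_jacobian(forward_fn, x, eps=1e-4):
        # Finite-difference Jacobian of y = f(x), flattened to a square matrix.
        y = forward_fn(x)
        n = x.numel()
        jac = torch.zeros(n, n)
        for i in range(n):
            dx = torch.zeros_like(x).view(-1)
            dx[i] = eps
            y_plus = forward_fn(x + dx.view_as(x))
            jac[:, i] = (y_plus - y).view(-1) / eps
        return torch.slogdet(jac).logabsdet

    def check_bijector(bijector, x):
        y = bijector.forward(x)
        analytic = bijector.log_abs_det_jacobian(x, y).sum()
        numeric = numerical_log_abs_det_jacobian(bijector.forward, x)
        assert torch.allclose(analytic, numeric, atol=1e-2)
        # The inverse test: x should be recovered from y up to numerical error.
        assert torch.allclose(bijector.inverse(y), x, atol=1e-5)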

    enhancement CLA Signed Merged 
    opened by stefanwebb 20
  • Lazy parameters and bijectors with metaclasses

    Motivation

    Shape information for a normalizing flow only becomes known when the base distribution has been specified. We have been searching for an ideal solution to express the delayed instantiation of Bijector and Params for this purpose. Several possible solutions are outlined in #57.

    Changes proposed

    The purpose of this PR is to showcase a prototype for a solution that uses metaclasses to express delayed instantiation. This works by intercepting .__call__ when a class is instantiated and returning a lazy wrapper around the class and its bound arguments if only partial arguments are given to .__init__. If all arguments are given, then the actual object is initialized. The lazy wrapper can have additional arguments bound to it, and it only becomes non-lazy when all the arguments are filled in (or have defaults).
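
    As a rough illustration of this pattern (a simplified sketch, not the actual implementation in this PR; class and argument names are made up):

    import functools
    import inspect

    class LazyMeta(type):
        def __call__(cls, *args, **kwargs):
            try:
                # Check whether all required __init__ arguments are present.
                inspect.signature(cls.__init__).bind(None, *args, **kwargs)
            except TypeError:
                # Partial arguments: return a lazy wrapper with them bound.
                return functools.partial(cls, *args, **kwargs)
            return super().__call__(*args, **kwargs)

    class Bijector(metaclass=LazyMeta):
        def __init__(self, shape, hidden_dims=(32, 32)):
            self.shape = shape
            self.hidden_dims = hidden_dims

    lazy = Bijector(hidden_dims=(64, 64))  # shape not known yet -> lazy wrapper
    concrete = lazy(shape=(10,))           # all arguments known -> real object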

    enhancement CLA Signed refactor 
    opened by stefanwebb 16
  • Docusaurus v2/API docs integration + Meta rebranding

    Motivation

    The API docs are currently lacking content and use an inflexible system to specify which modules to include.

    Also, I am unable to make the repo public until I have rebranded FB as Meta Platforms.

    Changes proposed

    I have integrated the new API markdown autogen tool with Docusaurus v2 styling into the website. It uses a general configuration file with regular expressions to specify what to include/exclude, and displays a box/label for each symbol, plus its signature (if it has one) and its raw docstring.

    The remaining tasks are parsing/formatting the docstring, adding symbol lists to module pages, and some small cosmetic fixes.

    I also completed the Meta rebranding in the copyright notices etc.

    Test Plan

    cd website
    yarn build
    yarn serve
    
    CLA Signed 
    opened by stefanwebb 12
  • Autogenerating imports for `flowtorch.parameters` and `flowtorch.bijectors`

    Motivation

    It is tiresome to have to add new components to __init__.py for bijectors, distributions, and parameters. We should be able to generate it automatically!

    Changes proposed

    Autogen for distributions was completed in a previous PR. This one completes it for parameters and bijectors.

    I also uncovered and fixed a bug in how utils.list_bijectors(), utils.list_distributions(), and utils.list_parameters() were working.
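
    For context, this kind of autogeneration boils down to something like the following sketch (illustrative only, not the actual script in this PR):

    import importlib
    import inspect
    import pkgutil

    def collect_subclasses(package, base_cls):
        """Find (module, class name) pairs for subclasses of base_cls inside a package."""
        found = []
        for info in pkgutil.iter_modules(package.__path__):
            module = importlib.import_module(f"{package.__name__}.{info.name}")
            for name, obj in inspect.getmembers(module, inspect.isclass):
                if obj.__module__ == module.__name__ and issubclass(obj, base_cls):
                    found.append((module.__name__, name))
        return sorted(found)

    def render_init(pairs):
        """Render the import lines and __all__ for an autogenerated __init__.py."""
        lines = [f"from {mod} import {cls}" for mod, cls in pairs]
        lines.append("__all__ = [" + ", ".join(repr(cls) for _, cls in pairs) + "]")
        return "\n".join(lines)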

    CLA Signed Merged 
    opened by stefanwebb 10
  • First sample scripts

    Motivation

    We would like a number of example scripts to demonstrate how to use FlowTorch.

    Changes proposed

    I have created a new folder, /samples, and added the simple example from the landing page of the website. At the moment it is a Python script, although I think in the future these samples will be converted into Jupyter notebooks that are mirrored on Colab.
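
    For orientation, the landing-page example is roughly of the following shape (a sketch from memory; exact class and argument names may differ slightly):

    import torch
    import flowtorch.bijectors as bij
    import flowtorch.distributions as D

    # Lazily constructed bijector; shapes are bound when the flow is built.
    bijector = bij.AffineAutoregressive()
    base_dist = torch.distributions.Independent(
        torch.distributions.Normal(torch.zeros(2), torch.ones(2)), 1
    )
    flow = D.Flow(base_dist, bijector)

    # Fit the flow to a simple target by maximum likelihood.
    target = torch.distributions.Independent(
        torch.distributions.Normal(torch.zeros(2) + 5, torch.ones(2) * 0.5), 1
    )
    optimizer = torch.optim.Adam(flow.parameters(), lr=5e-3)
    for step in range(1000):
        optimizer.zero_grad()
        loss = -flow.log_prob(target.sample((1000,))).mean()
        loss.backward()
        optimizer.step()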

    Test Plan

    The sample plots figures that demonstrate learning is working.

    CLA Signed Merged 
    opened by stefanwebb 10
  • Parameterless bijectors

    This PR migrates code from pyro.distributions.transforms and torch.distributions.transforms for parameterless bijectors.

    These are easy so I wanted to get them all over now!
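
    For reference, a parameterless bijector is just a fixed invertible transform; a minimal sketch along the lines of Exp (illustrative, not the migrated code):

    import torch

    class Exp:
        def forward(self, x):
            return torch.exp(x)

        def inverse(self, y):
            return torch.log(y)

        def log_abs_det_jacobian(self, x, y):
            # d/dx exp(x) = exp(x), so the element-wise log|det J| is simply x.
            return x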

    enhancement CLA Signed Merged 
    opened by stefanwebb 10
  • Empty params class: `flowtorch.params.Empty`

    This PR adds a flowtorch.params.Empty class that will be used for flowtorch.bijectors.FixedBijector bijectors like Sigmoid, Exp, etc. that don't have any learnable parameters.

    I have fixed a number of other things in order to get all the tests running!

    enhancement CLA Signed 
    opened by stefanwebb 10
  • Autoregressive Bijector type

    Motivation

    See #22 and #6.

    Changes proposed

    This PR implements a new bijectors.Autoregressive meta bijector. We then refactor bijectors.AffineAutoregressive as a class that inherits from bijectors.Affine and bijectors.Autoregressive.

    This change makes it easy to implement new autoregressive bijectors, like spline and neural autoregressive flows. All you have to do is implement the corresponding element-wise operator and inherit from the two classes, as in the sketch below.
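
    A sketch of the composition pattern (class bodies here are illustrative stand-ins, not the PR's code):

    import torch

    class Affine:
        """Element-wise op: y = loc + scale * x."""
        def _op(self, x, loc, scale):
            return loc + scale * x

    class Autoregressive:
        """Meta bijector: parameters come from an autoregressive conditioner."""
        def _conditioner(self, x):
            # Stand-in for a MADE-style network producing (loc, scale).
            return torch.zeros_like(x), torch.ones_like(x)

        def forward(self, x):
            loc, scale = self._conditioner(x)
            return self._op(x, loc, scale)

    class AffineAutoregressive(Affine, Autoregressive):
        pass

    # A new autoregressive flow then only needs its own element-wise op, e.g.
    # class SplineAutoregressive(Spline, Autoregressive): pass
    y = AffineAutoregressive().forward(torch.randn(3))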

    CLA Signed Merged 
    opened by stefanwebb 9
  • Test that type hints are present for all Bijector classes' methods

    Motivation

    mypy is excellent for checking types and preventing bugs; however, it is not applied if type hints aren't declared for a function or method. Enforcing this via a unit test should lead to better code!

    Changes proposed

    I've written a unit test that will raise an exception when a method's arguments do not have type hints. I have also added stubs for additional tests on a Bijector/Params definition.
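
    A sketch of such a check (illustrative; the actual test in this PR may differ):

    import inspect

    def assert_fully_annotated(cls):
        """Fail if any public method of cls is missing parameter or return hints."""
        for name, method in inspect.getmembers(cls, inspect.isfunction):
            if name.startswith("_"):
                continue
            sig = inspect.signature(method)
            for pname, param in sig.parameters.items():
                if pname == "self":
                    continue
                assert param.annotation is not inspect.Parameter.empty, \
                    f"{cls.__name__}.{name}: parameter '{pname}' lacks a type hint"
            assert sig.return_annotation is not inspect.Signature.empty, \
                f"{cls.__name__}.{name}: missing return annotation"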

    CLA Signed unit tests 
    opened by stefanwebb 9
  • Fixes pypi release, configured against test pypi

    Summary: Separates a PyPI release workflow based on GitHub releases (these create tags, so we don't get dev version numbers from setuptools_scm).

    Differential Revision: D28419348

    CLA Signed fb-exported Merged 
    opened by feynmanliang 9
  • CI installs stable Pytorch release

    Our CI was installing the nightly build of PyTorch, since 1.8.1 hadn't been released and we needed newly developed features in torch.distributions.constraints.

    Now that 1.8.1 is out, I have changed the config file to install the stable release.

    This PR contains the same changes as the flowtorch.params.Empty one so it can pass tests - @feynmanliang, could you please merge the other one first? Then only the relevant changes should appear here.

    CLA Signed Merged 
    opened by stefanwebb 9
  • Multivariate Bijectors Tutorial issue

    Issue Description

    The Multivariate Bijectors tutorial notebook has an issue: someone hit a keyboard interrupt and so it's not complete.

    Steps to Reproduce

    No steps to reproduce needed; here's a snapshot straight from this GitHub repository (https://github.com/facebookincubator/flowtorch/blob/main/tutorials/multivariate_bijections.ipynb).

    Expected Behavior

    Users should expect the tutorial to be complete.

    opened by maulberto3 1
  • Issue with log_prob values not exported to Cuda

    Issue Description

    Not able to get all the data onto the CUDA device. Facing a problem at 'loss = -dist_y.log_prob(data).mean()'. It looks like the data can't be transferred to the GPU. Do we need to register the data as a buffer and work around it?

    Error: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:1! (when checking argument for argument mat1 in method wrapper_addmm)

    Steps to Reproduce

    dataset = torch.tensor(data_train, dtype=torch.float)
    trainloader = torch.utils.data.DataLoader(dataset, batch_size=1024)
    for steps in range(t_steps):
        step_loss = 0
        for i, data in enumerate(trainloader):
            data = data.to(device)
            if i == 0:
                print(data.shape)
                # p_getsizeof(data)
            try:
                optimizer.zero_grad()
                loss = -dist_y.log_prob(data).mean()  # fails if dist_y's tensors are on CPU
                loss.backward()
                optimizer.step()
            except ValueError:
                print('Error')
                print('Skipping that batch')
    

    Expected Behavior

    Matrices should be computed on the CUDA device, without a conflict from data being on two different devices.
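
    A likely direction for a fix (an untested sketch; how the flow was originally constructed is not shown in the issue, so the construction below is an assumption): build the base distribution directly on the GPU and move the bijector's parameters there, since Bijectors are nn.Modules as of v0.7.

    import torch
    import flowtorch.bijectors as bij
    import flowtorch.distributions as D

    device = torch.device("cuda:1" if torch.cuda.is_available() else "cpu")
    dim = 2  # stand-in for the event size of data_train

    # Base distribution built directly on the target device.
    base_dist = torch.distributions.Independent(
        torch.distributions.Normal(torch.zeros(dim, device=device),
                                   torch.ones(dim, device=device)), 1
    )

    # Bind the lazy bijector to its shape, then move its parameters to the GPU
    # (assumes .to(device) behaves as for any nn.Module).
    bijector = bij.AffineAutoregressive()(shape=base_dist.event_shape).to(device)

    dist_y = D.Flow(base_dist, bijector)
    data = torch.randn(8, dim, device=device)
    loss = -dist_y.log_prob(data).mean()  # all tensors now share one device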

    System Info

    Please provide information about your setup

    • PyTorch Version (run print(torch.__version__))
    • Python version

    opened by bigmb 2
  • [WIP] Conv1x1

    Motivation

    Proposes a 1x1 convolution bijector.

    Test Plan

    from flowtorch.bijectors import Conv1x1Bijector
    import torch
    
    
    def test(LU_decompose, zero_init):
        c = Conv1x1Bijector(LU_decompose=LU_decompose, zero_init=zero_init)
        c = c(shape=torch.Size([10, 20, 20]))
        # Perturb the parameters so the bijector is not the identity.
        for p in c.parameters():
            p.data += torch.randn_like(p) / 5

        x = torch.randn(1, 10, 20, 20)
        y = c.forward(x)
        yp = y.detach_from_flow()
        xp = c.inverse(yp)
        # inverse(forward(x)) should recover x up to numerical error.
        assert (xp / x - 1).norm() < 1e-2

    for LU_decompose in (True, False):
        for zero_init in (True, False):
            test(LU_decompose, zero_init)
    
    

    Important

    This PR is branched out from the coupling layer. I'll update the branch once the review of the coupling layer is completed.

    CLA Signed 
    opened by vmoens 0
  • Split Bijector

    Motivation

    We introduce the Split Bijector, which splits a tensor in half, processes one half through a sequence of transformations, and normalizes the other.

    Changes proposed

    The new class first splits the tensor, then passes the outputs to _param_fn and then to the transform itself. The introduction of the _forward_pre_ops and _inverse_pre_ops methods is necessary because, in the inverse case, we need to first pass the input through the transform's inverse before passing it through the convolutional layer that gives us the normalizing constants. This breaks the _param_fn(...) -> _inverse(...) logic, as we need to do something before _param_fn. Since this might also be the case for the forward pass, we introduced a similar _forward_pre_ops method.

    Types of changes

    • [ ] Docs change / refactoring / dependency upgrade
    • [ ] Bug fix (non-breaking change which fixes an issue)
    • [x] New feature (non-breaking change which adds functionality)
    • [ ] Breaking change (fix or feature that would cause existing functionality to change)

    Checklist

    • [x] My code follows the code style of this project.
    • [x] My change requires a change to the documentation.
    • [ ] I have updated the documentation accordingly.
    • [x] I have read the CONTRIBUTING document.
    • [ ] I have added tests to cover my changes.
    • [ ] All new and existing tests passed.
    • [ ] The title of my pull request is a short description of the requested changes.
    CLA Signed 
    opened by vmoens 1
  • Split bijector

    A splitting bijector splits an input x into two equal parts, x1 and x2 (see, for instance, the Glow paper).

    Of those, only x1 is passed to the remaining part of the flow. x2 on the other hand is "normalized" by a location and scale determined by x1. The transform usually looks like this

    def _forward(self, x):
        x1, x2 = x.chunk(2, -1)
        loc, scale = some_parametric_fun(x1)
        x2 = (x2 - loc) / scale
        log_abs_det_jacobian = scale.reciprocal().log().sum()  # part of the jacobian that accounts for the transform of x2
        log_abs_det_jacobian += self.normal.log_prob(x2).sum()  # since x2 will disappear, we can include its prior log-lik here
        return x1, log_abs_det_jacobian
    

    The _inverse is done like this

    def _inverse(self, y):
        x1 = y
        loc, scale = some_parametric_fun(x1)
        x2 = torch.randn_like(x1)  # since we fit x2 to a gaussian in forward
        log_abs_det_jacobian = scale.reciprocal().log().sum()
        log_abs_det_jacobian += self.normal.log_prob(x2).sum()
        x2 = x2 * scale + loc
        return torch.cat([x1, x2], -1), log_abs_det_jacobian
    

    However, I personally find this coding very confusing. First and foremost, it messes up the logic y = flow(x) -> dist.log_prob(y). What if we don't want a normal? That seems orthogonal to the bijector's responsibility to me. Second, it includes in the LADJ a normal log-likelihood, which should come from the prior. Third, it makes the _inverse stochastic, but that should not be the case. Finally, it has an input of -- say -- dimension d and an output of d/2 (and conversely for _inverse).

    For some models (e.g. Glow), when generating data, we don't sample from a Gaussian with unit variance but from a Gaussian with some decreased temperature (e.g. an SD of 0.9 or something). With this logic, we'd have to tell every split layer in a flow to modify the self.normal scale!

    What I would suggest is this: we could use SplitBijector as a wrapper around another bijector. The way that would work is this:

    class SplitBijector(Bijector):
        def __init__(self, bijector):
             ...
             self.bijector = bijector
    
        def _forward(self, x):
            x1, x2 = x.chunk(2, -1)
            loc, scale = some_parametric_fun(x1)
            y2 = (x2 - loc) / scale
            log_abs_det_jacobian = scale.reciprocal().log().sum()  # part of the jacobian that accounts for the transform of x2
            y1 = self.bijector.forward(x1)
            log_abs_det_jacobian += self.bijector.log_abs_det_jacobian(x1, y1)
        y = torch.cat([y1, y2], -1)  # concatenate along the same dim used by chunk
            return y, log_abs_det_jacobian
    

    The _inverse would follow (a sketch is given below). Of course, bijector must have the same input and output space! That way, we solve all of our problems: input and output spaces match, nothing weird happens with a nested normal log-density, the prior density is only evaluated outside of the bijector, and one can tweak it at will without worrying about what happens inside the bijector.
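
    A hypothetical sketch of that _inverse (not code from this issue), under the same assumptions as the _forward above:

    def _inverse(self, y):
        y1, y2 = y.chunk(2, -1)
        x1 = self.bijector.inverse(y1)
        loc, scale = some_parametric_fun(x1)
        x2 = y2 * scale + loc  # undo the normalization of the split-off half
        log_abs_det_jacobian = scale.reciprocal().log().sum()  # same convention as _forward
        log_abs_det_jacobian += self.bijector.log_abs_det_jacobian(x1, y1)
        return torch.cat([x1, x2], -1), log_abs_det_jacobian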

    enhancement 
    opened by vmoens 1
Releases (0.8)
  • 0.8(Apr 27, 2022)

    • Fixed a bug in distributions.Flow.parameters() where it returned duplicate parameters
    • Several tutorials converted from .mdx to .ipynb format in anticipation of new tutorial system
    • Removed yarn.lock
    Source code(tar.gz)
    Source code(zip)
  • 0.7(Apr 25, 2022)

    This release adds two new minor features.

    A new class flowtorch.bijectors.Invert can be used to swap the forward and inverse operator of a Bijector. This is useful to turn, for example, Inverse Autoregressive Flow (IAF) into Masked Autoregressive Flow (MAF).
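
    For example (a short sketch of the intended usage; construction details may differ):

    import flowtorch.bijectors as bij

    # IAF has a cheap forward pass; wrapping it in Invert swaps forward and
    # inverse, giving the MAF-style direction for density evaluation.
    iaf = bij.AffineAutoregressive()
    maf = bij.Invert(iaf)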

    Bijector objects are now nn.Modules, which amongst other benefits allows easy saving and loading of state.

    Source code(tar.gz)
    Source code(zip)
  • 0.6(Mar 3, 2022)

    This small release fixes a bug in bijectors.ops.Spline where the sign of log(det(J)) was inverted for the .inverse method. It also fixes the unit tests so that they pick up this error in the future.

    Source code(tar.gz)
    Source code(zip)
  • 0.5(Feb 3, 2022)

    In this release, we add caching of intermediate values for Bijectors.

    What this means is that you can often reduce computation by calculating log|det(J)| at the same time as y = f(x). It's also useful for performing variational inference with Bijectors that don't have an explicit inverse. The mechanism by which this is achieved is a subclass of torch.Tensor called BijectiveTensor that bundles together (x, y, context, bijector, log_det_J).

    Special shout out to @vmoens for coming up with this neat solution and taking the implementation lead! Looking forward to your future contributions 🥳

    Source code(tar.gz)
    Source code(zip)
  • 0.4(Nov 18, 2021)

    • Implementations of Inverse Autoregressive Flow and Neural Spline Flow
    • Basic content for the website
    • Some unit tests for bijectors and distributions

    Source code(tar.gz)
    Source code(zip)
Owner
Meta Incubator