PyTorch implementation of normalizing flow models

Overview

Normalizing Flows

This is a PyTorch implementation of several normalizing flows, including a normalizing-flow variational autoencoder (NormalizingFlowVAE). It is used in the articles "A Gradient Based Strategy for Hamiltonian Monte Carlo Hyperparameter Optimization" and "Resampling Base Distributions of Normalizing Flows".

Implemented Flows

The package implements many popular flow architectures, among them Real NVP, Glow, Masked Autoregressive Flows (MAF), Neural Spline Flows (including circular variants), and Residual Flows (see the release notes below).

Installation

The latest version of the package can be installed via pip:

pip install --upgrade git+https://github.com/VincentStimper/normalizing-flows.git

If you want to use a GPU, make sure that PyTorch is set up correctly by following the instructions at the PyTorch website.

To run the example notebooks, clone the repository first:

git clone https://github.com/VincentStimper/normalizing-flows.git

and then install the dependencies:

pip install -r requirements_examples.txt
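
To verify the installation, here is a minimal sketch of defining, training, and sampling a small flow. It reuses the modules and methods quoted in the issue threads below; the training data is a hypothetical stand-in for a real dataset.

import torch
import normflows as nf

# Base distribution plus two flow layers (the same modules used in the
# examples further down this page).
flow = nf.NormalizingFlow(
    nf.distributions.DiagGaussian(2, trainable=False),
    [
        nf.flows.AutoregressiveRationalQuadraticSpline(2, 1, 8),
        nf.flows.LULinearPermute(2),
    ],
)

optimizer = torch.optim.Adam(flow.parameters(), lr=1e-3)

# Hypothetical training data standing in for a real dataset.
x = torch.randn(256, 2)

for step in range(100):
    optimizer.zero_grad()
    loss = flow.forward_kld(x)  # maximum-likelihood objective
    loss.backward()
    optimizer.step()

# Draw samples from the trained flow.
with torch.no_grad():
    samples, log_q = flow.sample(16)
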
Comments
  • Replication of comparable glow with papers

    Hi, thanks for developing this package. I find it very neat and flexible and would like to use it for my research. I noticed that in the paper "Resampling Base Distributions of Normalizing Flows", the bpd of your Glow model reaches 3.2~3.3, which is comparable to the original paper.

    I was wondering whether you could share your training scripts and the details needed to train Glow on CIFAR-10 to achieve the above bpd. The current example notebook is too sketchy and only reaches a bpd of 3.8. Thanks very much!

    opened by prclibo 1
  • Changed ActNorm flag to buffer to allow saving

    Without it being a buffer, when you load a model and sample from it, the flow thinks it is the first run-through and overwrites all the trained ActNorm parameters.
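
    For context, a minimal sketch (not the package's actual ActNorm code) of why the flag must be a buffer: buffers are serialized in state_dict, so the one-time data-dependent initialization is not repeated after loading.

    import torch
    import torch.nn as nn

    class ActNormSketch(nn.Module):
        def __init__(self, num_channels):
            super().__init__()
            self.scale = nn.Parameter(torch.ones(1, num_channels))
            self.shift = nn.Parameter(torch.zeros(1, num_channels))
            # As a plain Python attribute this flag would be lost on
            # save/load; as a buffer it is stored with the parameters.
            self.register_buffer("data_dep_init_done", torch.tensor(0.0))

        def forward(self, x):
            if self.data_dep_init_done == 0.0:
                # One-time data-dependent initialization.
                with torch.no_grad():
                    self.shift.data = -x.mean(0, keepdim=True)
                    self.scale.data = 1.0 / (x.std(0, keepdim=True) + 1e-6)
                    self.data_dep_init_done.fill_(1.0)
            return (x + self.shift) * self.scale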

    opened by arc82 1
  • Fix deprecation warning

    Fixes the deprecation warning documented in https://github.com/VincentStimper/normalizing-flows/issues/12.


    Sanity check: Running this before the change:

    import normflows as nf
    import torch
    
    torch.manual_seed(42)
    
    flow = nf.NormalizingFlow(
        nf.distributions.DiagGaussian(1, trainable=False),
        [
            nf.flows.AutoregressiveRationalQuadraticSpline(1, 1, 1),
            nf.flows.LULinearPermute(1)
        ]
    )
    
    with torch.no_grad():
        samples_flow, _ = flow.sample(4)
    
    print(samples_flow)
    

    gives:

    tensor([[0.4528],
            [0.6410],
            [0.5200],
            [0.5567]])
    

    After the change, the output stays the same.
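
    For reference, a minimal sketch of the substitution this PR applies, following the replacement suggested by the warning itself:

    import torch

    A = torch.triu(torch.rand(3, 3)) + torch.eye(3)  # upper-triangular system
    B = torch.rand(3, 2)

    # Deprecated call (note the reversed argument order):
    # X = torch.triangular_solve(B, A, upper=True).solution

    # Replacement:
    X = torch.linalg.solve_triangular(A, B, upper=True)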

    opened by timothygebhard 0
  • Sampling from flow raises deprecation warning

    Running the following minimal example:

    import normflows as nf
    import torch
    
    torch.manual_seed(42)
    
    flow = nf.NormalizingFlow(
        nf.distributions.DiagGaussian(1, trainable=False),
        [
            nf.flows.AutoregressiveRationalQuadraticSpline(1, 1, 1),
            nf.flows.LULinearPermute(1)
        ]
    )
    
    with torch.no_grad():
        samples_flow, _ = flow.sample(4)
    
    print(samples_flow)
    

    raises a UserWarning about an upcoming deprecation:

    /Users/timothy/Desktop/normalizing-flows/normflows/flows/mixing.py:437: UserWarning: torch.triangular_solve is deprecated in favor of torch.linalg.solve_triangular and will be removed in a future PyTorch release.
    torch.linalg.solve_triangular has its arguments reversed and does not return a copy of one of the inputs.
    X = torch.triangular_solve(B, A).solution
    should be replaced with
    X = torch.linalg.solve_triangular(A, B). (Triggered internally at  /Users/runner/work/pytorch/pytorch/pytorch/aten/src/ATen/native/BatchLinearAlgebra.cpp:2189.)
      outputs, _ = torch.triangular_solve(
    

    I will submit a PR shortly that fixes the issue 🙂

    opened by timothygebhard 0
  • Vberenz/mkdocs

    Added the MkDocs structure and refactored the docstrings (and applied black).

    To install the dependencies for building the documentation:

    pip install -e ".[docs]"
    

    To see the doc:

    mkdocs serve
    

    This starts a live server; modifications to the documentation are rendered live (excluding modifications to docstrings).

    To build the docs:

    mkdocs build
    

    This creates the site folder (including index.html).

    To extend the docs:

    Markdown files can be added in the docs folder, then the "nav" section of the mkdocs.yml file has to be updated, e.g.

    nav:
      - about: index.md
      - API: references.md
      - my other page: mymarkdown.md
    
    • Good to know: Markdown can be used in the docstrings.
    • Apparently, deploying the built documentation to GitHub is as simple as calling mkdocs gh-deploy (I have not tried it yet).

    I still need to do:

    • continuous build on GitHub (documentation is rebuilt and deployed on each merge into master)
    • a correction pass on the docstrings (I updated them but have not checked them one by one yet)
    • improving the layout, which is not so nice yet (especially for the API)
    • looking into displaying Jupyter notebooks, which MkDocs apparently supports
    opened by vincentberenz 0
  • feat: add optional gradient clipping to HMC flow

    Add the option to clip the gradient of the target log prob within HMC. For some target distributions, the log prob may have very large gradients, which can cause numerical instability; gradient clipping can help with this.
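
    A minimal sketch of the idea (the standalone helper and the clip_value knob are illustrative, not the package's actual HMC code):

    import torch

    def clipped_grad_log_prob(log_prob_fn, x, clip_value=100.0):
        # Gradient of the target log-probability, clamped elementwise to
        # tame the very large gradients described above.
        x = x.detach().requires_grad_(True)
        log_p = log_prob_fn(x).sum()
        (grad,) = torch.autograd.grad(log_p, x)
        return grad.clamp(-clip_value, clip_value)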

    opened by lollcat 0
  • Added minor fixes for bugs and warnings

    This commit made three changes to the original repo:

    1. Fixes the warning regarding the 'is' keyword:
    /home/donglin/Github/normalizing-flows/normflow/nets.py:45: SyntaxWarning: "is" with a literal. Did you mean "=="?
      if output_fn is "sigmoid":
    /home/donglin/Github/normalizing-flows/normflow/nets.py:47: SyntaxWarning: "is" with a literal. Did you mean "=="?
      elif output_fn is "relu":
    /home/donglin/Github/normalizing-flows/normflow/nets.py:49: SyntaxWarning: "is" with a literal. Did you mean "=="?
      elif output_fn is "tanh":
    /home/donglin/Github/normalizing-flows/normflow/nets.py:51: SyntaxWarning: "is" with a literal. Did you mean "=="?
      elif output_fn is "clampexp":
    
    2. Fixes the warning regarding 'torch.qr':
    /home/donglin/Github/normalizing-flows/normflow/flows.py:616: UserWarning: torch.qr is deprecated in favor of torch.linalg.qr and will be removed in a future PyTorch release.
    The boolean parameter 'some' has been replaced with a string parameter 'mode'.
    Q, R = torch.qr(A, some)
    should be replaced with
    Q, R = torch.linalg.qr(A, 'reduced' if some else 'complete') (Triggered internally at  /opt/conda/conda-bld/pytorch_1623448278899/work/aten/src/ATen/native/BatchLinearAlgebra.cpp:1940.)
      Q = torch.qr(torch.randn(self.num_channels, self.num_channels))[0]
    
    3. Eliminates the "nf.util.ToDevice" call during the data pre-processing in glow.ipynb
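
    For reference, a minimal sketch of the first two substitutions (illustrative, not the repository's actual code):

    import torch

    # 'is' fix: compare string literals with '==', not 'is'
    output_fn = "sigmoid"
    if output_fn == "sigmoid":
        pass

    # 'torch.qr' fix: use torch.linalg.qr with an explicit mode
    A = torch.randn(4, 4)
    Q, R = torch.linalg.qr(A, mode="reduced")  # replaces torch.qr(A, some=True)
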
    opened by Donglin-Wang2 0
  • RuntimeError: output with shape [32] doesn't match the broadcast shape [1, 32]

    Hi, I'd suggest adding some more tutorials, for example for ClassCondFlow; while trying to write one on my own, I keep encountering this error.

    opened by maulberto3 0
  • Normalizing Flow vs Normalizing Flow VAE behavior

    I can't help but wonder why the NormalizingFlow class uses the flows' inverse method when computing forward_kld while, on the contrary, NormalizingFlowVAE uses the flows' forward method.

    Because of this, when trying to fit MNIST with NormalizingFlow and passing a training batch of, say, (64, 784) images, I get the following error:

         34 for i in range(len(self.flows) - 1, -1, -1):
         35     z, log_det = self.flows[i].inverse(z)
    ---> 36     log_q += log_det
         37 log_q += self.q0.log_prob(z)
         38 return -torch.mean(log_q)
    
    RuntimeError: output with shape [64] doesn't match the broadcast shape [1, 64]
    

    Any help/suggestion?
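
    For reference, one minimal way this error arises: an in-place add cannot expand its output tensor to the broadcast shape, so accumulating a (1, 64)-shaped log_det into a (64,)-shaped log_q fails.

    import torch

    log_q = torch.zeros(64)        # accumulator of shape [64]
    log_det = torch.zeros(1, 64)   # term carrying an extra leading dimension
    log_q += log_det               # RuntimeError: output with shape [64]
                                   # doesn't match the broadcast shape [1, 64]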

    opened by maulberto3 0
  • Inconsistency between log_q and log_p in Encoder and NormalizingFlowVAE

    In the NormalizingFlowVAE class in core.py, this line indicates that the encoder outputs log_q:

    z, log_q = self.q0(x, num_samples=num_samples)

    Suppose that, as in this example, the encoder Gaussian (q0) is parameterized by an MLP. Looking at the source code of distributions/encoder.py, the forward method of the NNDiagGaussian class returns log_p:

    return z, log_p

    Inconsistency or not?

    opened by maulberto3 0
  • NormalizingFlow class in core.py does not provide context in forward_kld

    Thank you for a repo that's easy to handle with a normalizing flow of one's choice!

    I would like to implement a normalizing flow that optimizes multiple target distributions at once, depending on the context I provide to it. Yet currently, as far as I can tell, no context can be provided to the .forward_kld method of the NormalizingFlow class.

    It would be great if that were added! A hypothetical sketch of what this could look like is shown below.

    Cheers,

    Yves
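
    A purely hypothetical sketch of the requested API (the context keyword is not in the package; the loop mirrors the forward_kld structure quoted in an earlier issue above):

    import torch

    def forward_kld(self, x, context=None):
        # Hypothetical: thread a context tensor through conditional
        # flow layers and the base distribution.
        log_q = torch.zeros(len(x), device=x.device)
        z = x
        for flow in reversed(self.flows):
            z, log_det = flow.inverse(z, context=context)  # hypothetical kwarg
            log_q += log_det
        log_q += self.q0.log_prob(z, context=context)      # hypothetical kwarg
        return -torch.mean(log_q)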

    opened by ybernaerts 3
  • Decoupled sampling and generation interfaces

    Hi, this PR adds some new interfaces to allow better control during sampling, e.g. repeatedly generating from the same latent code while training the model. Please check whether it is useful to merge :)

    opened by prclibo 0
Releases (v1.5)
  • v1.5 (Dec 21, 2022)

    Rendered documentation has been added to the repository and is available at https://vincentstimper.github.io/normalizing-flows/.

    Tests were added for several flow modules; they can be run via pytest. With these new tests, several bugs were detected and fixed. The current coverage is about 61%. More tests will be added in the future, as well as automated testing and coverage analysis on GitHub.

    Moreover, the code was adapted to the syntax of newer PyTorch versions.

  • v1.4 (Jul 26, 2022)

    The package is now available on PyPI, which means that from now on it can be installed with

    pip install normflows
    

    The code was reformatted to conform to the black coding style.

    Moreover, the following fixes and additions are included:

    • The computation of the alpha-divergence objective was corrected.
    • A bug regarding sampling from the mixture of Gaussian base distribution was fixed.
    • A flow layer to warp periodic variables was added.
    • The dependency on the Residual Flow repository was removed.
  • v1.2 (Apr 5, 2022)

    The code was reorganized to be more hierarchical and readable. Also, all functionality required for Neural Spline Flows was added to the repository, removing the dependency on the original Neural Spline Flow repository.

    Furthermore, the following features were introduced:

    • Class to reverse a flow layer
    • Class to build a chain of flow layers
    • Affine Masked Autoregressive Flows (MAF)
    • Circular Neural Spline Flows
    • Neural Spline Flows with circular and non-circular coordinates
  • v1.1 (Feb 6, 2022)

  • v1.0 (Nov 25, 2021)

    Normalizing flow library comprising the most popular flow architectures, among them Real NVP, Glow, Neural Spline Flow, and Residual Flow.

Owner
Vincent Stimper
PhD student in Machine Learning at the University of Cambridge and the Max Planck Institute for Intelligent Systems