AlgoVision - A Framework for Differentiable Algorithms and Algorithmic Supervision

Overview

This repository includes the official implementation of our NeurIPS 2021 paper "Learning with Algorithmic Supervision via Continuous Relaxations" (Paper @ ArXiv, Video @ YouTube).

algovision is a Python 3.6+ and PyTorch 1.9.0+ based library for making algorithms differentiable. It can be installed via:

pip install algovision

Applications include smoothly integrating algorithms into neural networks for algorithmic supervision, problem-specific optimization within an algorithm, and whatever your imagination allows. As algovision is built on PyTorch, it also supports CUDA.

Check out the Documentation!

🌱 Intro

Deriving a loss from a smooth algorithm can be as easy as

from examples import get_bubble_sort
import torch

# Get an array (the first dimension is the batch dimension, which is always required)
array = torch.randn(1, 8, requires_grad=True)

bubble_sort = get_bubble_sort(beta=5)
result, loss = bubble_sort(array)

loss.backward()
print(array)
print(result)
print(array.grad)

Here, the loss is a sorting loss corresponding to the number of swaps in the bubble sort algorithm. But we can also define this algorithm from scratch:

from algovision import (
    Algorithm, Input, Output, Var, VarInt,                                          # core
    Let, LetInt, Print,                                                     # instructions
    Eq, NEq, LT, LEq, GT, GEq, CatProbEq, CosineSimilarity, IsTrue, IsFalse,  # conditions
    If, While, For,                                                   # control_structures
    Min, ArgMin, Max, ArgMax,                                                  # functions
)
import torch

bubble_sort = Algorithm(
    # Define the variables the input corresponds to
    Input('array'),
    # Declare and initialize all differentiable variables 
    Var('a',        torch.tensor(0.)),
    Var('b',        torch.tensor(0.)),
    Var('swapped',  torch.tensor(1.)),
    Var('loss',     torch.tensor(0.)),
    # Declare and initialize a hard integer variable (VarInt) for the control flow.
    # It can be defined in terms of a lambda expression. The required variables
    # are automatically inferred from the signature of the lambda expression.
    VarInt('n', lambda array: array.shape[1] - 1),
    # Start a relaxed While loop:
    While(IsTrue('swapped'),
        # Set `swapped` to 0 / False
        Let('swapped', 0),
        # Start an unrolled For loop. Corresponds to `for i in range(n):`
        For('i', 'n',
            # Set `a` to the `i`th element of `array`
            Let('a', 'array', ['i']),
            # Using an inplace lambda expression, we can include computations 
            # based on variables to obtain the element at position i+1. 
            Let('b', 'array', [lambda i: i+1]),
            # An If-Else statement with the condition a > b
            If(GT('a', 'b'),
                if_true=[
                    # Set the (i+1)-th element of array to a
                    Let('array', [lambda i: i + 1], 'a'),
                    # Set the i-th element of array to b
                    Let('array', ['i'], 'b'),
                    # Set swapped to 1 / True
                    Let('swapped', 1.),
                    # Increment the loss by 1 using a lambda expression
                    Let('loss', lambda loss: loss + 1.),
                ]
            ),
        ),
        # Decrement the hard integer variable n by 1
        LetInt('n', lambda n: n-1),
    ),
    # Define what the algorithm should return
    Output('array'),
    Output('loss'),
    # Set the inverse temperature beta
    beta=5,
)
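
The hand-built algorithm is called exactly like the prepackaged one from the intro:

# The first dimension is again the batch dimension.
array = torch.randn(1, 8, requires_grad=True)
result, loss = bubble_sort(array)

# `loss` is the relaxed number of swaps; gradients flow back into `array`.
loss.backward()
print(array.grad)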

👾 Full Instruction Set

The full set of modules is:

from algovision import (
    Algorithm, Input, Output, Var, VarInt,                                          # core
    Let, LetInt, Print,                                                     # instructions
    Eq, NEq, LT, LEq, GT, GEq, CatProbEq, CosineSimilarity, IsTrue, IsFalse,  # conditions
    If, While, For,                                                   # control_structures
    Min, ArgMin, Max, ArgMax,                                                  # functions
)

Algorithm is the main class; Input and Output define arguments and return values; Var defines differentiable variables, and VarInt defines non-differentiable integer variables. Eq, NEq, LT, etc. are relaxed conditions for If and While, the respective control structures, while For defines bounded loops of fixed length that are unrolled. Let sets a differentiable variable and LetInt sets a hard integer variable. Note that hard integer variables should only be used if they are independent of the input values, but they may depend on the input shape (e.g., for reducing the number of iterations after each traversal of a For loop). Print prints for debugging purposes. Min, ArgMin, Max, and ArgMax return the element-wise min/max/argmin/argmax of a list of tensors (of equal shape).
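
As an illustration of how these modules compose, here is a minimal sketch, built only from constructs already shown in the bubble sort example, that softly counts how many entries of the input exceed zero (the single-output return convention is an assumption, mirroring the two-output call above):

from algovision import Algorithm, Input, Output, Var, VarInt, Let, GT, If, For
import torch

count_greater = Algorithm(
    Input('array'),
    # The comparison threshold, stored as a (constant) differentiable variable.
    Var('threshold', torch.tensor(0.)),
    Var('x',         torch.tensor(0.)),
    Var('count',     torch.tensor(0.)),
    VarInt('n', lambda array: array.shape[1]),
    For('i', 'n',
        Let('x', 'array', ['i']),
        # Relaxed If: for finite beta, the increment is applied softly.
        If(GT('x', 'threshold'),
            if_true=[Let('count', lambda count: count + 1.)],
        ),
    ),
    Output('count'),
    beta=5,
)

array = torch.randn(1, 8, requires_grad=True)
count = count_greater(array)  # soft, differentiable count of entries above the threshold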

λ Lambda Expressions

Key to defining an algorithm are lambda expressions (see here for a reference). They allow defining anonymous functions and therefore allow expressing computations in-place. In most cases in algovision, it is possible to write a value in terms of a lambda expression. The name of the used variable will be inferred from the signature of the expression. For example, lambda x: x**2 will take the variable named x and return its square at the location where the expression is written.

Let('z', lambda x, y: x**2 + y) corresponds to the regular line of code z = x**2 + y. This also allows inserting complex external functions, including neural networks, as part of the lambda expression. Assuming net is a neural network, one can write Let('y', lambda x: net(x)) (corresponding to y = net(x)).
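
For instance, a minimal sketch of wrapping a network call inside an algorithm (the Linear layer and its shapes are stand-in assumptions; any module with matching shapes would do):

import torch
from algovision import Algorithm, Input, Output, Var, Let

net = torch.nn.Linear(8, 8)  # hypothetical network

apply_net = Algorithm(
    Input('x'),
    Var('y', torch.tensor(0.)),
    # The lambda's signature makes algovision bind the algorithm variable `x`.
    Let('y', lambda x: net(x)),  # corresponds to y = net(x)
    Output('y'),
    beta=5,
)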

Let

Let is a very flexible instruction. The following table shows its use cases.

| AlgoVision | Python | Description |
|---|---|---|
| Let('a', 'x') | a = x | Variable a is set to the value of variable x. |
| Let('a', lambda x: x**2) | a = x**2 | As soon as we compute anything on the right-hand side of the equation, we need to write it as a lambda expression. |
| Let('a', 'array', ['i']) | a = array[i] | Indexing on the right-hand side requires an additional list argument after the second argument. |
| Let('a', lambda array, i: array[:, i]) | a = array[i] | Equivalent to the row above: indexing can also be done manually inside a lambda expression. Note that in this case, the batch dimension has to be written explicitly. |
| Let('a', 'array', ['i', lambda j: j+1]) | a = array[i, j+1] | Multiple indices and lambda expressions are also supported. |
| Let('a', 'array', [None, slice(0, None, 2)]) | a = array[:, 0::2] | None and slices are also supported. |
| Let('a', ['i'], 'x') | a[i] = x | Indexing can also be done on the left-hand side of the equation. |
| Let('a', ['i'], 'x', ['j']) | a[i] = x[j] | ...or on both sides. |
| Let(['a', 'b'], lambda x, y: (x+y, x-y)) | a, b = x+y, x-y | Multiple return values are supported. |

In its simplest form, Let takes two arguments: a string naming the variable to which the result is written, and the value, which may be expressed via a lambda expression.

If the lambda expression returns multiple values, e.g., because a complex function is called that has two return values, the left argument can be a list of strings. That is, Let(['a', 'b'], lambda x, y: (x+y, x-y)) corresponds to a, b = x+y, x-y.

Let also supports indexing, denoted by an additional list argument after the left and/or the right argument. For example, Let('a', 'array', ['i']) corresponds to a = array[i], while Let('array', ['i'], 'b') corresponds to array[i] = b. Let('array', ['i'], 'array', ['j']), corresponding to array[i] = array[j], is also supported.

Note that indexing can also be expressed through lambda expressions. For example, Let('a', 'array', ['i']) is equivalent to Let('a', lambda array, i: array[:, i]). Note how in this case the batch dimension has to be explicitly taken into account ([:, ]). Relaxed indexing on the right-hand side is only supported through lambda expressions due to its complexity. Relaxed indexing on the left-hand side is supported if exactly one probability weight tensor is in the list (e.g., Let('array', [lambda x: get_weights(x)], 'a')).
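
To make the indexing variants concrete, here is a minimal sketch that swaps the first and last entries of the input. Two assumptions beyond the table above: an integer literal (0) is accepted in an index list like slices are, and a VarInt can be used as an index just like a loop variable:

import torch
from algovision import Algorithm, Input, Output, Var, VarInt, Let

swap_ends = Algorithm(
    Input('array'),
    Var('first', torch.tensor(0.)),
    Var('last',  torch.tensor(0.)),
    # Hard integer index of the last element (depends only on the shape).
    VarInt('n', lambda array: array.shape[1] - 1),
    Let('first', 'array', [0]),    # first = array[0]
    Let('last',  'array', ['n']),  # last  = array[n]
    Let('array', [0], 'last'),     # array[0] = last
    Let('array', ['n'], 'first'),  # array[n] = first
    Output('array'),
    beta=5,
)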

LetInt only supports setting the variable to an integer (Python int) or list of integers (as well as the same type via lambda expressions). Note that hard integer variables should only be used if they are independent of the input values, but they may depend on the input shape.
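
For example, a minimal sketch that sums the first half of the input, shrinking the hard loop bound with LetInt (the bound depends only on the input shape, as required):

import torch
from algovision import Algorithm, Input, Output, Var, VarInt, Let, LetInt, For

sum_first_half = Algorithm(
    Input('array'),
    Var('x',     torch.tensor(0.)),
    Var('total', torch.tensor(0.)),
    VarInt('m', lambda array: array.shape[1]),
    # Hard integer update: depends on the shape-derived m, not on input values.
    LetInt('m', lambda m: m // 2),
    For('i', 'm',
        Let('x', 'array', ['i']),
        Let('total', lambda total, x: total + x),
    ),
    Output('total'),
    beta=5,
)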

If you need help implementing your differentiable algorithm, you may schedule an appointment. This will also help me improve the documentation and usability.

🧪 Experiments

The experiments can be found in the experiments folder. Additional experiments will be added soon.

🔬 Sorting Supervision

The sorting supervision experiment can be run with

python experiments/train_sort.py

or by checking out this Colab notebook.

📖 Citing

If you use our library, please cite it as

@inproceedings{petersen2021learning,
  title={{Learning with Algorithmic Supervision via Continuous Relaxations}},
  author={Petersen, Felix and Borgelt, Christian and Kuehne, Hilde and Deussen, Oliver},
  booktitle={Conference on Neural Information Processing Systems (NeurIPS)},
  year={2021}
}

📜 License

algovision is released under the MIT license. See LICENSE for additional details.
