ConformalLayers: A non-linear sequential neural network with associative layers

Overview

ConformalLayers is a conformal embedding of sequential layers of Convolutional Neural Networks (CNNs) that allows associativity between operations like convolution, average pooling, dropout, flattening, padding, dilation, and stride. Such associativity considerably reduces the number of operations required to perform inference in this type of neural network.

This repository is an implementation of ConformalLayers written in Python, using Minkowski Engine and PyTorch as backends. This implementation is a first step toward the use of activation functions, like ReSPro, that can be represented as tensors, depending on the geometry model.

Please cite our SIBGRAPI'21 paper if you use this code in your research. The paper presents a complete description of the library:

@InProceedings{sousa_et_al-sibgrapi-2021,
  author    = {Sousa, Eduardo V. and Fernandes, Leandro A. F. and Vasconcelos, Cristina N.},
  title     = {{C}onformal{L}ayers: a non-linear sequential neural network with associative layers},
  booktitle = {Proceedings of the 2021 34th SIBGRAPI Conference on Graphics, Patterns and Images},
  year      = {2021},
}

Please let Eduardo Vera Sousa (http://www.ic.uff.br/~eduardovera), Leandro A. F. Fernandes (http://www.ic.uff.br/~laffernandes), and Cristina Nader Vasconcelos (http://www2.ic.uff.br/~crisnv/index.php) know if you want to contribute to this project. Also, do not hesitate to contact them if you encounter any problems.

Contents:

  1. Requirements
  2. How to Install ConformalLayers
  3. Running Examples
  4. Running Unit Tests
  5. Documentation
  6. License

1. Requirements

Make sure that you have the following tools before attempting to use ConformalLayers.

Required tools:

  • Python, with the PyTorch and Minkowski Engine packages, which ConformalLayers uses as backends.

Optional tools to use ConformalLayers:

  • Virtual environment to create an isolated workspace for a Python application.

  • Docker to create a container to run ConformalLayers.

2. How to Install ConformalLayers

No magic needed here. Just run:

python setup.py install

3. Running Examples

The basic steps for running the examples of ConformalLayers look like this:

cd <ConformalLayers-dir>/Experiments/<experiment-name>

For Experiments I and II, each file refers to an experiment described in the main paper. Thus, in order to run BaseReSProNet with the FashionMNIST dataset, for example, all you have to do is:

python BaseReSProNet.py --dataset=FashionMNIST

The values that can be used for the dataset argument are:

  • MNIST
  • FashionMNIST
  • CIFAR10

The loader of each dataset is defined in the Experiments/utils/datasets.py file.

Other arguments for the script files in Experiments I and II are listed below (see the example call after the list):

  • epochs (int value)
  • batch_size (int value)
  • learning_rate (float value)
  • optimizer (adam or rmsprop)
  • dropout (float value)
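
For example, a complete call combining these arguments might look like this (hypothetical argument values; the flag syntax follows the dataset example above):

python BaseReSProNet.py --dataset=CIFAR10 --epochs=30 --batch_size=64 --learning_rate=0.001 --optimizer=adam --dropout=0.1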

For Experiments III and IV, since we measure the amount of memory used, we use an external file to orchestrate the calls and make sure we have a clean environment for each iteration. The orchestrator is written in the files with the _manager.py suffix.

You can also run the files that correspond to each architecture individually, without the orchestrator. To run the D3ModNetCL architecture, for example, just run:

python D3ModNetCL.py

The arguments for the non-orchestrated scripts in Experiments III and IV are listed below (see the example call after the list):

  • num_inferences (int value)
  • batch_size (int value)
  • depth (int value, Experiment III only)
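
For example, a call with hypothetical argument values might look like this:

python D3ModNetCL.py --num_inferences=100 --batch_size=32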

The files in the networks folder contain the description of each architecture used in our experiments and demonstrate the usage of the classes and methods of our library.

4. Running Unit Tests

The basic steps for running the unit tests of ConformalLayers look like this:

cd <ConformalLayers-dir>/tests

To run all tests, simply run:

python test_all.py

To run the tests for each module, run:

python test_<module_name>.py

5. Documentation

Here you find a brief description of the modules, classes, functions, and parameters available to the user. The detailed documentation is not ready yet.

Contents:

  • Modules
  • Convolution
  • Pooling
  • Activation
  • Regularization

Modules

Here we present the main modules implemented in our framework. Most of the modules are used just like in PyTorch, so users with some background in that framework benefit from this implementation. For users not familiar with PyTorch, the usage is still quite simple and intuitive.

  • Conv1d, Conv2d, Conv3d, ConvNd – Convolution operation implemented for n-D signals.

  • AvgPool1d, AvgPool2d, AvgPool3d, AvgPoolNd – Average pooling operation implemented for n-D signals.

  • BaseActivation – The abstract class for activation function layers. To extend the library, one shall implement this class.

  • ReSPro – The layer that corresponds to the ReSPro activation function. ReSPro is a linear function with non-linear behavior that can be encoded as a tensor. The non-linearity of this function is controlled by a parameter α that can be provided as an argument or inferred from the data.

  • Regularization – In this version, Dropout is the only regularization available. In this approach, during the training phase, we randomly shut down some neurons with a probability p, passed as an argument to this module.

These modules are composed into a ConformalLayers object in a way very similar to pure PyTorch. The class ConformalLayers plays an important role in this task, as you can see by comparing the code snippets below:

# This one is built with pure PyTorch
import torch.nn as nn

class D3ModNet(nn.Module):
    def __init__(self):
        super(D3ModNet, self).__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels=3, out_channels=32, kernel_size=3),
            nn.ReLU(),
            nn.AvgPool2d(kernel_size=2, stride=2),
            nn.Conv2d(in_channels=32, out_channels=32, kernel_size=3),
            nn.ReLU(),
            nn.AvgPool2d(kernel_size=2, stride=2),
            nn.Conv2d(in_channels=32, out_channels=32, kernel_size=3),
            nn.ReLU(),
            nn.AvgPool2d(kernel_size=2, stride=2),
        )
        self.fc1 = nn.Linear(128, 10)

    def forward(self, x):
        x = self.features(x)
        x = x.view(x.shape[0], -1)
        x = self.fc1(x)
        return x

# This one is built with ConformalLayers
import torch.nn as nn
import ConformalLayers as cl

class D3ModNetCL(nn.Module):
    def __init__(self):
        super(D3ModNetCL, self).__init__()
        self.features = cl.ConformalLayers(
            cl.Conv2d(in_channels=3, out_channels=32, kernel_size=3),
            cl.ReSPro(),
            cl.AvgPool2d(kernel_size=2, stride=2),
            cl.Conv2d(in_channels=32, out_channels=32, kernel_size=3),
            cl.ReSPro(),
            cl.AvgPool2d(kernel_size=2, stride=2),
            cl.Conv2d(in_channels=32, out_channels=32, kernel_size=3),
            cl.ReSPro(),
            cl.AvgPool2d(kernel_size=2, stride=2),
        )
        self.fc1 = nn.Linear(128, 10)

    def forward(self, x):
        x = self.features(x)
        x = x.view(x.shape[0], -1)
        x = self.fc1(x)
        return x
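
Either network can then be used like any other PyTorch module. Below is a quick sanity check (a sketch; it assumes 32x32 RGB inputs such as CIFAR-10, so that the flattened feature map of 32 channels x 2 x 2 matches the 128-unit linear layer):

import torch

model = D3ModNetCL()
x = torch.rand(8, 3, 32, 32)  # a batch of eight 32x32 RGB images
logits = model(x)             # expected shape: (8, 10)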

They look pretty much like the same code, right? That's because we've implemented ConformalLayers to make the transition as smooth as possible for the PyTorch user. Most of the modules have almost the same method signatures as the ones provided by PyTorch.

Convolution

The convolution operation implemented in ConformalLayers by the modules ConvNd, Conv1d, Conv2d, and Conv3d is almost the same as the one implemented in PyTorch, except that we do not allow bias. This is mostly due to the construction of our logic when building the representation with tensors. Although we have a few ideas on how to include bias in this representation, they are not part of the current version. The parameters are detailed below; they are originally described in the PyTorch convolution documentation page. The exception is the padding_mode parameter, which is always set to 'zeros' in our implementation.

  • in_channels (int) – Number of channels in the input image

  • out_channels (int) – Number of channels produced by the convolution

  • kernel_size (int or tuple) – Size of the convolving kernel

  • stride (int or tuple, optional) – Stride of the convolution. Default: 1

  • padding (int, tuple or str, optional) – Padding added to both sides of the input. Default: 0

  • dilation (int or tuple, optional) – Spacing between kernel elements. Default: 1

  • groups (int, optional) – Number of blocked connections from input channels to output channels. Default: 1
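
For example, a 2-D convolution layer without bias can be created as follows (a minimal sketch using the parameters above; the channel counts are arbitrary):

import ConformalLayers as cl

conv = cl.Conv2d(in_channels=3, out_channels=16, kernel_size=3,
                 stride=1, padding=0, dilation=1, groups=1)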

Pooling

In our current implementation, we only support average pooling, which is implemented by the modules AvgPoolNd, AvgPool1d, AvgPool2d, and AvgPool3d. The parameter list, originally available in the PyTorch average pooling documentation page, is described below:

  • kernel_size – the size of the window

  • stride – the stride of the window. Default value is kernel_size

  • padding – implicit zero padding to be added on both sides

  • ceil_mode – when True, will use ceil instead of floor to compute the output shape

  • count_include_pad – when True, will include the zero-padding in the averaging calculation
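
For example, a 2x2 average pooling layer can be created as follows (a minimal sketch):

import ConformalLayers as cl

pool = cl.AvgPool2d(kernel_size=2, stride=2)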

Activation

Our activation module has the ReSPro activation function implemented natively. By using reflections, scalings, and projections on a hypersphere in higher dimensions, we created a non-linear, differentiable, associative activation function that can be represented in tensor form. It has only one parameter, which controls how close to linear or non-linear the curve is. More details are available in the main paper.

  • alpha (float, optional) – controls the non-linearity of the curve. If it is not provided, it is automatically estimated from the data.
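
For example (a minimal sketch):

import ConformalLayers as cl

act = cl.ReSPro(alpha=1.0)  # fixed non-linearity
act_auto = cl.ReSPro()      # alpha is estimated from the data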

Regularization

In the regularization module, Dropout is the only technique implemented in this version. It is based on the idea of randomly shutting down some neurons in order to prevent overfitting. It takes only two parameters, listed below. This list was originally available in the PyTorch documentation page.

  • p – probability of an element to be zeroed. Default: 0.5

  • inplace – If set to True, will do this operation in-place. Default: False
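
For example (a minimal sketch, assuming the layer is exposed as cl.Dropout with a PyTorch-like signature):

import ConformalLayers as cl

drop = cl.Dropout(p=0.5, inplace=False)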

6. License

This software is licensed under the GNU General Public License v3.0. See the LICENSE file for details.
