
Overview


Accelerated DL & RL



PyTorch framework for Deep Learning research and development. It was developed with a focus on reproducibility, fast experimentation, and code/idea reuse, so that you can research and develop something new rather than write another regular train loop.
Break the cycle - use the Catalyst!

Project manifest. Part of PyTorch Ecosystem. Part of Catalyst Ecosystem:

  • Alchemy - Experiments logging & visualization
  • Catalyst - Accelerated Deep Learning Research and Development
  • Reaction - Convenient Deep Learning models serving

Catalyst at AI Landscape.


Catalyst.Segmentation

Note: this repo uses the advanced Catalyst Config API and could be a bit out-of-date right now. Please use Catalyst's minimal examples section as a starting point and for up-to-date use cases.

You will learn how to build an image segmentation pipeline with transfer learning using the Catalyst framework.

Goals

  1. Install requirements
  2. Prepare data
  3. Run: raw data → production-ready model
  4. Get results
  5. Customize your own pipeline

1. Install requirements

Using local environment:

pip install -r requirements/requirements.txt

Using docker:

This builds a Docker image catalyst-segmentation with the necessary libraries:

make docker-build

2. Get Dataset

Try on open datasets

You can use one of the open datasets:

export DATASET="isbi"

rm -rf data/
mkdir -p data

if [[ "$DATASET" == "isbi" ]]; then
    # binary segmentation
    # http://brainiac2.mit.edu/isbi_challenge/
    download-gdrive 1uyPb9WI0t2qMKIqOjFKMv1EtfQ5FAVEI isbi_cleared_191107.tar.gz
    tar -xf isbi_cleared_191107.tar.gz &>/dev/null
    mv isbi_cleared_191107 ./data/origin
elif [[ "$DATASET" == "voc2012" ]]; then
    # semantic segmentation
    # http://host.robots.ox.ac.uk/pascal/VOC/voc2012/
    wget http://host.robots.ox.ac.uk/pascal/VOC/voc2012/VOCtrainval_11-May-2012.tar
    tar -xf VOCtrainval_11-May-2012.tar &>/dev/null
    mkdir -p ./data/origin/images/; mv VOCdevkit/VOC2012/JPEGImages/* $_
    mkdir -p ./data/origin/raw_masks; mv VOCdevkit/VOC2012/SegmentationClass/* $_
fi

Use your own dataset

Prepare your dataset

Data structure

Make sure that the final data folder has the required structure:

/path/to/your_dataset/
        images/
            image_1
            image_2
            ...
            image_N
        raw_masks/
            mask_1
            mask_2
            ...
            mask_N
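
To double-check the layout before running the pipeline, you can pair up images and masks with a short script. This is an illustrative sketch, not part of the repo, and it assumes a one-to-one naming correspondence between images and masks, as in the layout above:

from pathlib import Path

root = Path("/path/to/your_dataset")  # adjust to your dataset location
images = sorted(p.stem for p in (root / "images").iterdir())
masks = sorted(p.stem for p in (root / "raw_masks").iterdir())

if images != masks:
    missing = sorted(set(images) ^ set(masks))
    print(f"Mismatched image/mask names: {missing[:10]}")
else:
    print(f"OK: {len(images)} image/mask pairs found")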

Data location

  • The easiest way is to move your data:

    mv /path/to/your_dataset/* /catalyst.segmentation/data/origin

    In that way you can run the pipeline with default settings.

  • If you prefer to leave the data in /path/to/your_dataset/:

    • In local environment:

      • Link the directory:
        ln -s /path/to/your_dataset $(pwd)/data/origin
      • Or just set the path to your dataset via DATADIR=/path/to/your_dataset when you start the pipeline.
    • Using docker

      You need to set:

         -v /path/to/your_dataset:/data \ # instead of the default $(pwd)/data/origin:/data

      in the script below to start the pipeline.

3. Segmentation pipeline

Fast&Furious: raw data → production-ready model

The pipeline will automatically guide you from raw data to the production-ready model.

We will initialize a U-Net model with a pre-trained ResNet-18 encoder. During the pipeline, the model will be trained sequentially in two stages: first the head(s), then the whole network.
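
For intuition, here is a minimal PyTorch sketch of the same two-stage idea. This is not the repo's code path (the pipeline builds its ResnetUnet from the configs); the sketch assumes the segmentation_models_pytorch package and placeholder learning rates, purely for illustration:

import torch
import segmentation_models_pytorch as smp  # assumption: stand-in for the repo's ResnetUnet

# U-Net with a pre-trained ResNet-18 encoder
model = smp.Unet(encoder_name="resnet18", encoder_weights="imagenet", classes=1)

# Stage 1: freeze the encoder and train only the decoder/head
for p in model.encoder.parameters():
    p.requires_grad = False
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3  # placeholder lr
)
# ... stage-1 training loop ...

# Stage 2: unfreeze everything and fine-tune the whole network
for p in model.encoder.parameters():
    p.requires_grad = True
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # placeholder lr
# ... stage-2 training loop ...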

Binary segmentation pipeline

Run in local environment:

CUDA_VISIBLE_DEVICES=0 \
CUDNN_BENCHMARK="True" \
CUDNN_DETERMINISTIC="True" \
WORKDIR=./logs \
DATADIR=./data/origin \
IMAGE_SIZE=256 \
CONFIG_TEMPLATE=./configs/templates/binary.yml \
NUM_WORKERS=4 \
BATCH_SIZE=256 \
bash ./bin/catalyst-binary-segmentation-pipeline.sh

Run in docker:

export LOGDIR=$(pwd)/logs
docker run -it --rm --shm-size 8G --runtime=nvidia \
   -v $(pwd):/workspace/ \
   -v $LOGDIR:/logdir/ \
   -v $(pwd)/data/origin:/data \
   -e "CUDA_VISIBLE_DEVICES=0" \
   -e "USE_WANDB=1" \
   -e "LOGDIR=/logdir" \
   -e "CUDNN_BENCHMARK='True'" \
   -e "CUDNN_DETERMINISTIC='True'" \
   -e "WORKDIR=/logdir" \
   -e "DATADIR=/data" \
   -e "IMAGE_SIZE=256" \
   -e "CONFIG_TEMPLATE=./configs/templates/binary.yml" \
   -e "NUM_WORKERS=4" \
   -e "BATCH_SIZE=256" \
   catalyst-segmentation ./bin/catalyst-binary-segmentation-pipeline.sh

Semantic segmentation pipeline

Run in local environment:

CUDA_VISIBLE_DEVICES=0 \
CUDNN_BENCHMARK="True" \
CUDNN_DETERMINISTIC="True" \
WORKDIR=./logs \
DATADIR=./data/origin \
IMAGE_SIZE=256 \
CONFIG_TEMPLATE=./configs/templates/semantic.yml \
NUM_WORKERS=4 \
BATCH_SIZE=256 \
bash ./bin/catalyst-semantic-segmentation-pipeline.sh

Run in docker:

export LOGDIR=$(pwd)/logs
docker run -it --rm --shm-size 8G --runtime=nvidia \
   -v $(pwd):/workspace/ \
   -v $LOGDIR:/logdir/ \
   -v $(pwd)/data/origin:/data \
   -e "CUDA_VISIBLE_DEVICES=0" \
   -e "USE_WANDB=1" \
   -e "LOGDIR=/logdir" \
   -e "CUDNN_BENCHMARK='True'" \
   -e "CUDNN_DETERMINISTIC='True'" \
   -e "WORKDIR=/logdir" \
   -e "DATADIR=/data" \
   -e "IMAGE_SIZE=256" \
   -e "CONFIG_TEMPLATE=./configs/templates/semantic.yml" \
   -e "NUM_WORKERS=4" \
   -e "BATCH_SIZE=256" \
   catalyst-segmentation ./bin/catalyst-semantic-segmentation-pipeline.sh

Once the pipeline is running, you don't have to do anything else; it remains only to wait for the best model!

Visualizations

You can use a W&B account for visualization right after pip install wandb:

wandb: (1) Create a W&B account
wandb: (2) Use an existing W&B account
wandb: (3) Don't visualize my results

TensorBoard can also be used for visualization:

tensorboard --logdir=/catalyst.segmentation/logs

4. Results

All results of all experiments can be found locally in WORKDIR, by default catalyst.segmentation/logs. The results of an experiment, for instance catalyst.segmentation/logs/logdir-191107-094627-2f31d790, contain:

checkpoints

  • The directory contains all checkpoints: best, last, and also those of all stages.
  • best.pth and last.pth can also be found in the corresponding experiment in your W&B account.
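
If you need a checkpoint outside the pipeline, it can be restored in plain PyTorch. A minimal sketch, assuming the same stand-in model as above; the exact dictionary layout may differ between Catalyst versions, so inspect checkpoint.keys() if loading fails:

import torch
import segmentation_models_pytorch as smp  # assumption, as in the earlier sketch

model = smp.Unet(encoder_name="resnet18", classes=1)
checkpoint = torch.load(
    "logs/logdir-191107-094627-2f31d790/checkpoints/best.pth", map_location="cpu"
)
# Catalyst typically stores the weights under "model_state_dict"
model.load_state_dict(checkpoint["model_state_dict"])
model.eval()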

configs

  • The directory contains the experiment's configs for reproducibility.

logs

  • The directory contains all logs of the experiment.
  • Metrics and logs can also be viewed in the corresponding experiment in your W&B account.

code

  • The directory contains a snapshot of the code used to run the experiment. This is necessary for complete reproducibility.

5. Customize your own pipeline

For your future experiments, the framework provides powerful configs that allow you to optimize the configuration of the whole segmentation pipeline in a controlled and reproducible way.

Configure your experiments

  • Common settings for the training stages and model parameters can be found in catalyst.segmentation/configs/_common.yml.

    • model_params: detailed configuration of models, including:
      • the model, for instance ResnetUnet
      • a detailed architecture description
      • whether to use a pretrained model
    • stages: you can configure training or inference in several stages with different hyperparameters. In our example:
      • optimizer params
      • first learn the head(s), then train the whole network
  • The CONFIG_TEMPLATE with the other experiment hyperparameters, such as data_params, is here: catalyst.segmentation/configs/templates/binary.yml. The config allows you to define:

    • data_params: path, batch size, number of workers, and so on
    • callbacks_params: callbacks are used to execute code during training, for example, to compute metrics or save checkpoints. Catalyst provides a wide variety of helpful callbacks, and you can also write custom ones (see the sketch below).

You can find many more options for configuring experiments in the Catalyst documentation.
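
As an illustration of a custom callback, here is a minimal sketch. It assumes the runner-based Callback API of newer Catalyst releases; the Config API used in this repo may pass a state object instead of runner, so treat this as a template rather than drop-in code:

from catalyst.core import Callback, CallbackOrder

class BatchCounterCallback(Callback):
    """Counts processed batches and reports the total as a loader metric."""

    def __init__(self):
        super().__init__(order=CallbackOrder.Metric)
        self.count = 0

    def on_loader_start(self, runner):
        self.count = 0

    def on_batch_end(self, runner):
        self.count += 1

    def on_loader_end(self, runner):
        runner.loader_metrics["num_batches"] = float(self.count)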
