PyTorch Implementation of Auto-Compressing Subset Pruning for Semantic Image Segmentation

Introduction

ACoSP is an online pruning algorithm that compresses convolutional neural networks during training. It learns to select a subset of channels from convolutional layers via a sigmoid function, as shown in the figure: for each channel i, a learned weight w_i is used to scale its activations.

ACoSP selection scheme.

The segmentation maps display compressed PSPNet-50 models trained on Cityscapes. The models are up to 16 times smaller.
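To make the selection scheme concrete, below is a minimal, hypothetical sketch of such a sigmoid channel mask (the class name, zero initialization and fixed temperature are illustrative assumptions; the actual masking layer lives in acosp/):

    import torch
    import torch.nn as nn

    class SigmoidChannelMask(nn.Module):
        """Illustrative sketch: scales each channel i of a feature map by sigmoid(w_i / T)."""

        def __init__(self, num_channels: int, temperature: float = 1.0):
            super().__init__()
            self.w = nn.Parameter(torch.zeros(num_channels))  # one learned w_i per channel
            self.temperature = temperature  # annealed towards 0 during training

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            # As the temperature decreases, the gates saturate towards 0 or 1,
            # turning the soft selection into a (near-)binary channel subset.
            gate = torch.sigmoid(self.w / self.temperature)
            return x * gate.view(1, -1, 1, 1)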

Repository

This repository is a PyTorch implementation of ACoSP based on hszhao/semseg. It was used to run all experiments for the publication and is meant to guarantee reproducibility and auditability of our results.

The training, test, and configuration infrastructure is kept close to semseg, with only minor modifications to improve reproducibility and to integrate our pruning code. The model/ package contains the PSPNet50 and SegNet model definitions, and acosp/ contains all code required to prune during training.

The current configs expect a special folder structure (but can be easily adapted):

  • /data: Datasets, Pretrained-weights
  • /logs/exp: Folder to store experiments

Installation

  1. Clone the repository:

    git clone git@github.com:merantix/acosp.git
  2. Install ACoSP including requirements:

    pip install .

Using ACoSP

The implementation of ACoSP is encapsulated in /acosp, and using it independently of the rest of the experimentation code is straightforward.

  1. Create a pruner and adapt the model:

     from acosp.pruner import SoftTopKPruner
     import acosp.inject

     # Create pruner object
     pruner = SoftTopKPruner(
         starting_epoch=0,
         ending_epoch=100,  # Pruning duration
         final_sparsity=0.5,  # Final sparsity
     )
     # Add sigmoid soft k masks to model
     pruner.configure_model(model)

  2. In your training loop, update the temperature of all masking layers:

     # Update the temperature in all masking layers
     pruner.update_mask_layers(model, epoch)

  3. Convert the soft pruning to hard pruning when ending_epoch is reached:

     if epoch == pruner.ending_epoch:
         # Convert to binary channel mask
         acosp.inject.soft_to_hard_k(model)
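
Putting the three steps together, a minimal training loop could look like the following sketch (model, optimizer, num_epochs and train_one_epoch are placeholders, not part of the ACoSP API):

     from acosp.pruner import SoftTopKPruner
     import acosp.inject

     pruner = SoftTopKPruner(starting_epoch=0, ending_epoch=100, final_sparsity=0.5)
     pruner.configure_model(model)  # add soft masks to the (placeholder) model

     for epoch in range(num_epochs):
         pruner.update_mask_layers(model, epoch)  # anneal the sigmoid temperature
         train_one_epoch(model, optimizer)        # placeholder for your training step
         if epoch == pruner.ending_epoch:
             acosp.inject.soft_to_hard_k(model)   # freeze to binary channel masks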

Experiments

  1. Highlights:

     • All initialization and trained models are available. The structure is:
       | init/  # initial models
       | exp/
       |-- ade20k/  # ade20k/camvid/cityscapes/voc2012/cifar10
       | |-- pspnet50_{SPARSITY}/  # sparsity is the relative amount of weights removed, i.e. sparsity=0.75 <==> compression_ratio=4, since compression_ratio = 1/(1 - sparsity)
       |   |-- model # model files
       |   |-- ... # config/train/test files
       |-- evals/  # all results with class-wise IoU/Acc

  2. Hardware requirements: at least 60GB (PSPNet50) / 16GB (SegNet) of GPU memory. Training can be distributed across multiple GPUs.

  3. Train:

    • Download the related datasets and symlink their paths as follows (alternatively, modify the relevant paths specified in the config folder):

      mkdir -p /data
      ln -s /path_to_ade20k_dataset /data/ade20k
      
    • Download the ImageNet pre-trained models and put them under the /data folder for weight initialization. Remember to use the correct dataset format detailed in FAQ.md.

    • Specify the GPU to use in the config, then start training. (Training with ACoSP has only been carried out on a single GPU; it has not been tested with DDP.) The general structure to access individual configs is as follows:

      sh tool/train.sh ${DATASET} ${CONFIG_NAME_WITHOUT_DATASET}

      E.g. to train a PSPNet50 on the ade20k dataset using the config `config/ade20k/ade20k_pspnet50.yaml`, execute:

      sh tool/train.sh ade20k pspnet50
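
      Assuming the configs follow the pspnet50_{SPARSITY} naming used for the experiment folders above, a compressed run would be invoked the same way (the exact config name is an assumption; check the config/ folder):

      sh tool/train.sh ade20k pspnet50_0.75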
  4. Test:

    • Download the trained segmentation models and put them under the folder specified in the config, or modify the specified paths.

    • For full testing (to reproduce the listed performance):

      sh tool/test.sh ade20k pspnet50
  5. Visualization: tensorboardX is incorporated for better visualization.

    tensorboard --logdir=/logs/exp/ade20k
  6. Other:

    • Resources: GoogleDrive LINK contains shared models, visual predictions and data lists.
    • Models: ImageNet pre-trained models and trained segmentation models can be accessed. Note that our ImageNet pre-trained models differ slightly from the original ResNet implementation in the early layers.
    • Predictions: Visual predictions of several models can be accessed.
    • Datasets: attributes (names and colors) are in folder dataset and some sample lists can be accessed.
    • Some FAQs: FAQ.md.

Performance

Description: mIoU and mAcc stand for mean IoU and mean per-class accuracy, respectively. General parameters shared across the different datasets are listed below:

  • Network: {NETWORK} @ ACoSP-{COMPRESSION_RATIO}
  • Train Parameters: sync_bn(True), scale_min(0.5), scale_max(2.0), rotate_min(-10), rotate_max(10), zoom_factor(8), aux_weight(0.4), base_lr(1e-2), power(0.9), momentum(0.9), weight_decay(1e-4).
  • Test Parameters: ignore_label(255).
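
  For reference, the reported metrics can be computed from a confusion matrix as in the following illustrative sketch (not the repository's evaluation code):

    import numpy as np

    def miou_macc(conf: np.ndarray):
        """Mean IoU and mean per-class accuracy from a KxK confusion matrix.

        conf[i, j] counts pixels of ground-truth class i predicted as class j.
        """
        tp = np.diag(conf).astype(float)
        gt = conf.sum(axis=1)    # pixels per ground-truth class
        pred = conf.sum(axis=0)  # pixels per predicted class
        iou = tp / np.maximum(gt + pred - tp, 1)  # avoid division by zero
        acc = tp / np.maximum(gt, 1)
        return iou.mean(), acc.mean()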
  1. ADE20K: Train Parameters: classes(150), train_h(473), train_w(473), epochs(100). Test Parameters: classes(150), test_h(473), test_w(473), base_size(512).

    • Setting: train on train (20210 images) set and test on val (2000 images) set.
    Network                mIoU/mAcc
    PSPNet50               41.42/51.48
    PSPNet50 @ ACoSP-2     38.97/49.56
    PSPNet50 @ ACoSP-4     33.67/43.17
    PSPNet50 @ ACoSP-8     28.04/35.60
    PSPNet50 @ ACoSP-16    19.39/25.52
  2. PASCAL VOC 2012: Train Parameters: classes(21), train_h(473), train_w(473), epochs(50). Test Parameters: classes(21), test_h(473), test_w(473), base_size(512).

    • Setting: train on train_aug (10582 images) set and test on val (1449 images) set.
    Network                mIoU/mAcc
    PSPNet50               77.30/85.27
    PSPNet50 @ ACoSP-2     72.71/81.87
    PSPNet50 @ ACoSP-4     65.84/77.12
    PSPNet50 @ ACoSP-8     58.26/69.65
    PSPNet50 @ ACoSP-16    48.06/58.83
  3. Cityscapes: Train Parameters: classes(19), train_h(713/512 - PSP/SegNet), train_w(713/1024 - PSP/SegNet), epochs(200). Test Parameters: classes(19), test_h(713/512 - PSP/SegNet), test_w(713/1024 - PSP/SegNet), base_size(2048).

    • Setting: train on fine_train (2975 images) set and test on fine_val (500 images) set.
    Network                mIoU/mAcc
    PSPNet50               77.35/84.27
    PSPNet50 @ ACoSP-2     74.11/81.73
    PSPNet50 @ ACoSP-4     71.50/79.40
    PSPNet50 @ ACoSP-8     66.06/74.33
    PSPNet50 @ ACoSP-16    59.49/67.74
    SegNet                 65.12/73.85
    SegNet @ ACoSP-2       64.62/73.19
    SegNet @ ACoSP-4       60.77/69.57
    SegNet @ ACoSP-8       54.34/62.48
    SegNet @ ACoSP-16      44.12/50.87
  4. CamVid: Train Parameters: classes(11), train_h(360), train_w(720), epochs(450). Test Parameters: classes(11), test_h(360), test_w(720), base_size(360).

    • Setting: train on train (367 images) set and test on test (233 images) set.
    Network                mIoU/mAcc
    SegNet                 55.49±0.85/65.44±1.01
    SegNet @ ACoSP-2       51.85±0.83/61.86±0.85
    SegNet @ ACoSP-4       50.10±1.11/59.79±1.49
    SegNet @ ACoSP-8       47.25±1.18/56.87±1.10
    SegNet @ ACoSP-16      42.27±1.95/51.25±2.02
  5. CIFAR-10: Train Parameters: classes(10), train_h(32), train_w(32), epochs(50). Test Parameters: classes(10), test_h(32), test_w(32), base_size(32).

    • Setting: train on train (50000 images) set and test on test (10000 images) set.
    Network                mAcc
    ResNet18               89.68
    ResNet18 @ ACoSP-2     88.50
    ResNet18 @ ACoSP-4     86.21
    ResNet18 @ ACoSP-8     81.06
    ResNet18 @ ACoSP-16    76.81

Citation

If you find the acosp/ code or trained models useful, please consider citing the ACoSP publication.

For the general training code, please also consider referencing hszhao/semseg.

Question

Some FAQs are collected in FAQ.md. You are welcome to send pull requests or give advice.
