Group Fisher Pruning for Practical Network Compression (ICML 2021)

Overview

Group Fisher Pruning for Practical Network Compression (ICML 2021)

By Liyang Liu*, Shilong Zhang*, Zhanghui Kuang, Jing-Hao Xue, Aojun Zhou, Xinjiang Wang, Yimin Chen, Wenming Yang, Qingmin Liao, Wayne Zhang

Updates

  • All one-stage detection models have been released (21/6/2021)

NOTES

All detection models have been released. The classification models will be released later, because we want to refactor all our code into a Hook so that it becomes a more general tool for all tasks in OpenMMLab.
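
For readers unfamiliar with the Hook mechanism mentioned above, the snippet below is a minimal sketch of how a custom hook is registered in MMCV-based codebases; the class name and its body are placeholders, not this repository's implementation.

from mmcv.runner import HOOKS, Hook


@HOOKS.register_module()
class ToyPruningHook(Hook):
    """Placeholder hook that only marks where pruning logic could live."""

    def after_train_iter(self, runner):
        # A real pruning hook would collect channel importance here,
        # e.g. from gradients gathered during the backward pass.
        pass

# Enabled from a config via:
# custom_hooks = [dict(type='ToyPruningHook')]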

We will continue to improve this method and apply it to other tasks, such as segmentation and pose estimation.

The layer grouping algorithm is implemented on top of PyTorch's autograd. If you are not familiar with this feature and you can read Chinese, these materials may be helpful (a minimal sketch follows the list):

  1. AutoGrad in Pytorch

  2. Hook of MMCV
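
As an illustration of the underlying idea (not this repository's implementation), the backward graph exposed by autograd can be walked from a network output back to its parameters through grad_fn and next_functions; the connectivity discovered this way is what allows coupled layers to be grouped. A minimal sketch:

import torch
import torchvision

model = torchvision.models.resnet18()
out = model(torch.randn(1, 3, 224, 224))

# Map parameter identities to names so leaf (AccumulateGrad) nodes can be recognized.
param_names = {id(p): n for n, p in model.named_parameters()}
visited, reached = set(), []

def walk(fn):
    # Depth-first traversal of the autograd graph starting from a grad_fn node.
    if fn is None or fn in visited:
        return
    visited.add(fn)
    # Leaf parameters appear as AccumulateGrad nodes carrying a .variable attribute.
    if hasattr(fn, 'variable') and id(fn.variable) in param_names:
        reached.append(param_names[id(fn.variable)])
    for next_fn, _ in fn.next_functions:
        walk(next_fn)

walk(out.grad_fn)
print(len(reached), 'parameters reached from the output')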

Introduction

1. Compares favorably with state-of-the-art methods.

2. Can be applied to various complicated network structures and various tasks.

3. Boosts inference speed on GPU under the same FLOPs.

Get Started

1. Create a basic environment with PyTorch 1.3.0 and mmcv-full

Due to frequent changes in the autograd interface, we only guarantee that the code works with pytorch==1.3.0.

  1. Create the environment
conda create -n open-mmlab python=3.7 -y
conda activate open-mmlab
  2. Install PyTorch 1.3.0 and the corresponding torchvision.
conda install pytorch=1.3.0 cudatoolkit=10.0 torchvision=0.2.2 -c pytorch
  3. Build mmcv-full from source with PyTorch 1.3.0 and CUDA 10.0

Please use gcc-5.4 and nvcc 10.0

 git clone https://github.com/open-mmlab/mmcv.git
 cd mmcv
 MMCV_WITH_OPS=1 pip install -e .

2. Install the corresponding codebase in OpenMMLab.

e.g. MMDetection

pip install mmdet==2.13.0

3. Pruning the model.

e.g. Detection

cd detection

Modify load_from in xxxx_pruning.py to the path of the baseline model (a hypothetical example follows the commands below).

# for slurm train
sh tools/slurm_train.sh PARTITION_NAME JOB_NAME configs/retina/retina_pruning.py work_dir
# for slurm test
sh tools/slurm_test.sh PARTITION_NAME JOB_NAME configs/retina/retina_pruning.py PATH_CKPT --eval bbox
# for torch.dist
# sh tools/dist_train.sh configs/retina/retina_pruning.py 8
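
For reference, a hypothetical excerpt of a pruning config; the base config name and checkpoint path are placeholders, only the load_from field matters:

# configs/retina/retina_pruning.py (hypothetical excerpt)
_base_ = './retina_baseline.py'                     # assumed baseline config name
load_from = 'work_dirs/retina_baseline/latest.pth'  # path to the baseline checkpoint to be pruned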

4. Finetune the model.

e.g. Detection

cd detection

Modify deploy_from in custom_hooks of xxxx_finetune.py to the path of the pruned model (a hypothetical example follows the commands below).

# for slurm train
sh tools/slurm_train.sh PARTITION_NAME JOB_NAME configs/retina/retina_finetune.py work_dir
# for slurm test
sh tools/slurm_test.sh PARTITION_NAME JOB_NAME configs/retina/retina_finetune.py PATH_CKPT --eval bbox
# for torch.dist
# sh tools/dist_train.sh configs/retina/retina_finetune.py 8
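
Likewise, a hypothetical excerpt of a finetuning config; the hook type name and path are placeholders and may differ from the ones used in this repository:

# configs/retina/retina_finetune.py (hypothetical excerpt)
custom_hooks = [
    dict(
        type='PruningHook',                                # assumed hook name
        deploy_from='work_dirs/retina_pruning/pruned.pth'  # path to the pruned checkpoint
    )
]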

Models

Detection

Method      Backbone   Baseline (mAP)   Finetuned (mAP)   Download
RetinaNet   R-50-FPN   36.5             36.5              Baseline/Pruned/Finetuned
ATSS*       R-50-FPN   38.1             37.9              Baseline/Pruned/Finetuned
PAA*        R-50-FPN   39.0             39.4              Baseline/Pruned/Finetuned
FSAF        R-50-FPN   37.4             37.4              Baseline/Pruned/Finetuned

* indicates models trained without Group Normalization in the heads.

Classification

Coming soon.

Please cite our paper in your publications if it helps your research.

@InProceedings{liu2021group,
  title     = {Group Fisher Pruning for Practical Network Compression},
  author    = {Liu, Liyang and Zhang, Shilong and Kuang, Zhanghui and Zhou, Aojun and Xue, Jing-Hao and Wang, Xinjiang and Chen, Yimin and Yang, Wenming and Liao, Qingmin and Zhang, Wayne},
  booktitle = {Proceedings of the 38th International Conference on Machine Learning},
  year      = {2021},
  series    = {Proceedings of Machine Learning Research},
  month     = {18--24 Jul},
  publisher = {PMLR},
}