Network Compression via Central Filter

Overview

This repository provides code for compressing convolutional networks with the Central Filter method: a similarity-matrix generation step followed by pruning and fine-tuning. Scripts are included for VGG-16, ResNet-56, DenseNet-40, and GoogLeNet on CIFAR-10, and for ResNet-50 on ImageNet.

Environments

The code has been tested in the following environments:

  • Python 3.8
  • PyTorch 1.8.1
  • CUDA 10.2
  • torchsummary, torchvision, thop

Both Windows and Linux are supported.

Pre-trained Models

CIFAR-10:

VGG-16 | ResNet-56 | DenseNet-40 | GoogLeNet

ImageNet:

ResNet50

Running Code

The experiment is divided into two steps: similarity matrix generation and model training. We provide the pre-computed similarity data, so the first step can be skipped.

Similarity Matrix Generation

@echo off
@rem for windows
start cmd /c ^
"cd /D [code dir]  ^
& [python.exe dir]\python.exe rank.py ^
--arch [model arch name] ^
--resume [pre-trained model dir] ^
--num_workers [number of workers] ^
--image_num [number of batches] ^
--batch_size [batch size] ^
--dataset [CIFAR10 or ImageNet] ^
--data_dir [data dir] ^
--calc_dis_mtx True ^
& pause"
# for linux
python rank.py \
--arch [model arch name] \
--resume [pre-trained model dir] \
--num_workers [number of workers] \
--image_num [number of batches] \
--batch_size [batch size] \
--dataset [CIFAR10 or ImageNet] \
--data_dir [data dir] \
--calc_dis_mtx True
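
rank.py appears to build, for each convolutional layer, a pairwise distance matrix between the feature maps that the layer produces on the sampled batches (hence --calc_dis_mtx). The actual metric is defined in rank.py; the snippet below is only a minimal sketch of the idea, assuming Euclidean distance between batch-averaged, flattened channel maps (distance_matrix is an illustrative name, not the repository's API).

import torch

def distance_matrix(feature_maps: torch.Tensor) -> torch.Tensor:
    # feature_maps: (batch, channels, H, W) activations of one layer.
    # Returns a (channels, channels) distance matrix; a channel whose map
    # is close to many others is a candidate "central" filter.
    flat = feature_maps.mean(dim=0).flatten(1)  # batch-average, then (channels, H*W)
    return torch.cdist(flat, flat)              # pairwise Euclidean distances

fmap = torch.randn(64, 16, 32, 32)              # e.g. one CIFAR-10 batch, 16 channels
print(distance_matrix(fmap).shape)              # torch.Size([16, 16])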

Model Training

The experimental results reported in the paper, together with the corresponding configurations, are as follows.

1. VGGNet

Architecture | Compress Rate | Params | FLOPs | Accuracy
VGG-16 (baseline) | - | 14.98M (0.0%) | 313.73M (0.0%) | 93.96%
VGG-16 | [0.3]+[0.2]*4+[0.3]*2+[0.4]+[0.85]*4 | 2.45M (83.6%) | 124.10M (60.4%) | 93.67%
VGG-16 | [0.3]*5+[0.5]*3+[0.8]*4 | 2.18M (85.4%) | 91.54M (70.8%) | 93.06%
VGG-16 | [0.3]*2+[0.45]*3+[0.6]*3+[0.85]*4 | 1.51M (89.9%) | 65.92M (79.0%) | 92.49%
python main_win.py \
--arch vgg_16_bn \
--resume [pre-trained model dir] \
--compress_rate [0.3]*2+[0.45]*3+[0.6]*3+[0.85]*4 \
--num_workers [number of workers] \
--epochs 30 \
--lr 0.001 \
--lr_decay_step 5 \
--save_id 1 \
--weight_decay 0.005 \
--data_dir [dataset dir] \
--dataset CIFAR10 
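
The --compress_rate argument is an ordinary Python list expression: each entry is the fraction of filters pruned in one convolutional layer (the exact layer mapping is defined by the model code). For example, the rate used above expands as follows:

rate = [0.3]*2 + [0.45]*3 + [0.6]*3 + [0.85]*4
print(rate)       # [0.3, 0.3, 0.45, 0.45, 0.45, 0.6, 0.6, 0.6, 0.85, 0.85, 0.85, 0.85]
print(len(rate))  # 12 entries, presumably one per pruned layer of vgg_16_bn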

2. ResNet-56

Architecture | Compress Rate | Params | FLOPs | Accuracy
ResNet-56 (baseline) | - | 0.85M (0.0%) | 125.49M (0.0%) | 93.26%
ResNet-56 | [0.]+[0.2,0.]*9+[0.3,0.]*9+[0.4,0.]*9 | 0.53M (37.6%) | 86.11M (31.4%) | 93.64%
ResNet-56 | [0.]+[0.3,0.]*9+[0.4,0.]*9+[0.5,0.]*9 | 0.45M (47.1%) | 75.70M (39.7%) | 93.59%
ResNet-56 | [0.]+[0.2,0.]*2+[0.6,0.]*7+[0.7,0.]*9+[0.8,0.]*9 | 0.19M (77.6%) | 40.00M (68.1%) | 92.19%
python main_win.py \
--arch resnet_56 \
--resume [pre-trained model dir] \
--compress_rate [0.]+[0.2,0.]*2+[0.6,0.]*7+[0.7,0.]*9+[0.8,0.]*9 \
--num_workers [number of workers] \
--epochs 30 \
--lr 0.001 \
--lr_decay_step 5 \
--save_id 1 \
--weight_decay 0.005 \
--data_dir [dataset dir] \
--dataset CIFAR10 
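
For resnet_56 the rate list pairs up. A minimal sketch of how the rate above expands, assuming the first entry covers the stem convolution and each residual block contributes two entries, with the second convolution of every block left unpruned (0.):

rate = [0.] + [0.2, 0.]*2 + [0.6, 0.]*7 + [0.7, 0.]*9 + [0.8, 0.]*9
print(len(rate))  # 55 = 1 stem conv + 2 convs x 27 residual blocks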

3. DenseNet-40

Architecture | Compress Rate | Params | FLOPs | Accuracy
DenseNet-40 (baseline) | - | 1.04M (0.0%) | 282.00M (0.0%) | 94.81%
DenseNet-40 | [0.]+[0.3]*12+[0.1]+[0.3]*12+[0.1]+[0.3]*8+[0.]*4 | 0.67M (35.6%) | 165.38M (41.4%) | 94.33%
DenseNet-40 | [0.]+[0.5]*12+[0.3]+[0.4]*12+[0.3]+[0.4]*9+[0.]*3 | 0.46M (55.8%) | 109.40M (61.3%) | 93.71%
python main_win.py \
--arch densenet_40 \
--resume [pre-trained model dir] \
--compress_rate [0.]+[0.5]*12+[0.3]+[0.4]*12+[0.3]+[0.4]*9+[0.]*3 \
--num_workers [number of workers] \
--epochs 30 \
--lr 0.001 \
--lr_decay_step 5 \
--save_id 1 \
--weight_decay 0.005 \
--data_dir [dataset dir] \
--dataset CIFAR10 

4. GoogLeNet

Architecture | Compress Rate | Params | FLOPs | Accuracy
GoogLeNet (baseline) | - | 6.15M (0.0%) | 1.52B (0.0%) | 95.05%
GoogLeNet | [0.2]+[0.7]*15+[0.8]*9+[0.,0.4,0.] | 2.73M (55.6%) | 0.56B (63.2%) | 94.70%
GoogLeNet | [0.2]+[0.9]*24+[0.,0.4,0.] | 2.17M (64.7%) | 0.37B (75.7%) | 94.13%
python main_win.py \
--arch googlenet \
--resume [pre-trained model dir] \
--compress_rate [0.2]+[0.9]*24+[0.,0.4,0.] \
--num_workers [number of workers] \
--epochs 1 \
--lr 0.001 \
--save_id 1 \
--weight_decay 0. \
--data_dir [dataset dir] \
--dataset CIFAR10

After the pruning run above, fine-tune the pruned model (saved as finally_pruned_model/googlenet_1.pt, matching --save_id) with --from_scratch True:

python main_win.py \
--arch googlenet \
--from_scratch True \
--resume finally_pruned_model/googlenet_1.pt \
--num_workers 2 \
--epochs 30 \
--lr 0.01 \
--lr_decay_step 5,15 \
--save_id 1 \
--weight_decay 0.005 \
--data_dir [dataset dir] \
--dataset CIFAR10

5. ResNet-50

Architecture | Compress Rate | Params | FLOPs | Top-1 Accuracy | Top-5 Accuracy
ResNet-50 (baseline) | - | 25.55M (0.0%) | 4.11B (0.0%) | 76.15% | 92.87%
ResNet-50 | [0.]+[0.1,0.1,0.2]*1+[0.5,0.5,0.2]*2+[0.1,0.1,0.2]*1+[0.5,0.5,0.2]*3+[0.1,0.1,0.2]*1+[0.5,0.5,0.2]*5+[0.1,0.1,0.1]+[0.2,0.2,0.1]*2 | 16.08M (36.9%) | 2.13B (47.9%) | 75.08% | 92.30%
ResNet-50 | [0.]+[0.1,0.1,0.4]*1+[0.7,0.7,0.4]*2+[0.2,0.2,0.4]*1+[0.7,0.7,0.4]*3+[0.2,0.2,0.3]*1+[0.7,0.7,0.3]*5+[0.1,0.1,0.1]+[0.2,0.3,0.1]*2 | 13.73M (46.2%) | 1.50B (63.5%) | 73.43% | 91.57%
ResNet-50 | [0.]+[0.2,0.2,0.65]*1+[0.75,0.75,0.65]*2+[0.15,0.15,0.65]*1+[0.75,0.75,0.65]*3+[0.15,0.15,0.65]*1+[0.75,0.75,0.65]*5+[0.15,0.15,0.35]+[0.5,0.5,0.35]*2 | 8.10M (68.2%) | 0.98B (76.2%) | 70.26% | 89.82%
python main_win.py \
--arch resnet_50 \
--resume [pre-trained model dir] \
--data_dir [dataset dir] \
--dataset ImageNet \
--compress_rate [0.]+[0.1,0.1,0.4]*1+[0.7,0.7,0.4]*2+[0.2,0.2,0.4]*1+[0.7,0.7,0.4]*3+[0.2,0.2,0.3]*1+[0.7,0.7,0.3]*5+[0.1,0.1,0.1]+[0.2,0.3,0.1]*2 \
--num_workers [number of workers] \
--batch_size 64 \
--epochs 2 \
--lr_decay_step 1 \
--lr 0.001 \
--save_id 1 \
--weight_decay 0. \
--input_size 224 \
--start_cov 0

After the pruning run above, fine-tune the pruned ResNet-50 (finally_pruned_model/resnet_50_1.pt) with --from_scratch True:

python main_win.py \
--arch resnet_50 \
--from_scratch True \
--resume finally_pruned_model/resnet_50_1.pt \
--num_workers 8 \
--epochs 40 \
--lr 0.001 \
--lr_decay_step 5,20 \
--save_id 2 \
--batch_size 64 \
--weight_decay 0.0005 \
--input_size 224 \
--data_dir [dataset dir] \
--dataset ImageNet 
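
The Params and FLOPs columns in the tables above can be reproduced with thop (listed under Environments). A minimal sketch, using the torchvision ResNet-50 as a stand-in for the repository's model definition; note that thop counts multiply-accumulates, which is what the tables appear to report as FLOPs:

import torch
from thop import profile                 # thop is listed under Environments
from torchvision.models import resnet50  # stand-in for the repo's model definition

model = resnet50()
inputs = torch.randn(1, 3, 224, 224)     # --input_size 224, as in the ImageNet runs
flops, params = profile(model, inputs=(inputs,))
print(f"{flops / 1e9:.2f}B FLOPs, {params / 1e6:.2f}M params")  # roughly the 4.11B / 25.55M baseline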