Network Compression via Central Filter

Overview

This repository contains the PyTorch code for Central Filter, a filter-pruning approach to network compression. It provides pruning and fine-tuning scripts for VGG-16, ResNet-56, DenseNet-40, and GoogLeNet on CIFAR-10, and for ResNet-50 on ImageNet.

Environments

The code has been tested in the following environments:

  • Python 3.8
  • PyTorch 1.8.1
  • CUDA 10.2
  • torchsummary, torchvision, thop

Both Windows and Linux are supported.
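
If the Python packages listed above are not already installed, they can typically be added with pip; this is a sketch, and the torchvision build should match the installed PyTorch and CUDA versions:

# install the auxiliary packages (pick a torchvision version compatible with PyTorch 1.8.1 / CUDA 10.2)
pip install torchsummary thop torchvision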

Pre-trained Models

CIFAR-10:

VGG-16 | ResNet-56 | DenseNet-40 | GoogLeNet

ImageNet:

ResNet-50

Running Code

The experiment is divided into two steps: similarity matrix generation and model training. The pre-computed similarity matrices are provided, so the first step can be skipped.

Similarity Matrix Generation

@echo off
@rem for windows
start cmd /c ^
"cd /D [code dir]  ^
& [python.exe dir]\python.exe rank.py ^
--arch [model arch name] ^
--resume [pre-trained model dir] ^
--num_workers [worker numbers] ^
--image_num [batch numbers] ^
--batch_size [batch size] ^
--dataset [CIFAR10 or ImageNet] ^
--data_dir [data dir] ^
--calc_dis_mtx True ^
& pause"
# for linux
python rank.py \
--arch [model arch name] \
--resume [pre-trained model dir] \
--num_workers [worker numbers] \
--image_num [batch numbers] \
--batch_size [batch size] \
--dataset [CIFAR10 or ImageNet] \
--data_dir [data dir] \
--calc_dis_mtx True
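
For example, on Linux a filled-in invocation for VGG-16 on CIFAR-10 might look like the following; the checkpoint path, data directory, and numeric values are illustrative placeholders rather than recommended settings:

# illustrative example (paths and values are placeholders)
python rank.py \
--arch vgg_16_bn \
--resume ./pretrained/vgg_16_bn.pt \
--num_workers 4 \
--image_num 5 \
--batch_size 128 \
--dataset CIFAR10 \
--data_dir ./data \
--calc_dis_mtx True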

Model Training

The experimental results and the corresponding configurations reported in the paper are listed below. The --compress_rate argument specifies the pruning rate of each layer as a Python-style list expression; for example, [0.3]*2+[0.45]*3 expands to [0.3, 0.3, 0.45, 0.45, 0.45], i.e. the first two layers are pruned at a rate of 0.3 and the next three at 0.45.

1. VGGNet

Architecture | Compress Rate | Params | FLOPs | Accuracy
--- | --- | --- | --- | ---
VGG-16 (Baseline) | - | 14.98M (0.0%) | 313.73M (0.0%) | 93.96%
VGG-16 | [0.3]+[0.2]*4+[0.3]*2+[0.4]+[0.85]*4 | 2.45M (83.6%) | 124.10M (60.4%) | 93.67%
VGG-16 | [0.3]*5+[0.5]*3+[0.8]*4 | 2.18M (85.4%) | 91.54M (70.8%) | 93.06%
VGG-16 | [0.3]*2+[0.45]*3+[0.6]*3+[0.85]*4 | 1.51M (89.9%) | 65.92M (79.0%) | 92.49%
python main_win.py \
--arch vgg_16_bn \
--resume [pre-trained model dir] \
--compress_rate [0.3]*2+[0.45]*3+[0.6]*3+[0.85]*4 \
--num_workers [worker numbers] \
--epochs 30 \
--lr 0.001 \
--lr_decay_step 5 \
--save_id 1 \
--weight_decay 0.005 \
--data_dir [dataset dir] \
--dataset CIFAR10 
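
As a concrete instance of the template above, the third configuration in the table could be launched as follows; the paths and worker count are illustrative, and quoting the compress rate keeps the shell from interpreting the brackets and asterisks as glob patterns:

# illustrative example for the [0.3]*2+[0.45]*3+[0.6]*3+[0.85]*4 setting
python main_win.py \
--arch vgg_16_bn \
--resume ./pretrained/vgg_16_bn.pt \
--compress_rate "[0.3]*2+[0.45]*3+[0.6]*3+[0.85]*4" \
--num_workers 4 \
--epochs 30 \
--lr 0.001 \
--lr_decay_step 5 \
--save_id 1 \
--weight_decay 0.005 \
--data_dir ./data \
--dataset CIFAR10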

2. ResNet-56

Architecture | Compress Rate | Params | FLOPs | Accuracy
--- | --- | --- | --- | ---
ResNet-56 (Baseline) | - | 0.85M (0.0%) | 125.49M (0.0%) | 93.26%
ResNet-56 | [0.]+[0.2,0.]*9+[0.3,0.]*9+[0.4,0.]*9 | 0.53M (37.6%) | 86.11M (31.4%) | 93.64%
ResNet-56 | [0.]+[0.3,0.]*9+[0.4,0.]*9+[0.5,0.]*9 | 0.45M (47.1%) | 75.7M (39.7%) | 93.59%
ResNet-56 | [0.]+[0.2,0.]*2+[0.6,0.]*7+[0.7,0.]*9+[0.8,0.]*9 | 0.19M (77.6%) | 40.0M (68.1%) | 92.19%
python main_win.py \
--arch resnet_56 \
--resume [pre-trained model dir] \
--compress_rate [0.]+[0.2,0.]*2+[0.6,0.]*7+[0.7,0.]*9+[0.8,0.]*9 \
--num_workers [worker numbers] \
--epochs 30 \
--lr 0.001 \
--lr_decay_step 5 \
--save_id 1 \
--weight_decay 0.005 \
--data_dir [dataset dir] \
--dataset CIFAR10 

3. DenseNet-40

Architecture | Compress Rate | Params | FLOPs | Accuracy
--- | --- | --- | --- | ---
DenseNet-40 (Baseline) | - | 1.04M (0.0%) | 282.00M (0.0%) | 94.81%
DenseNet-40 | [0.]+[0.3]*12+[0.1]+[0.3]*12+[0.1]+[0.3]*8+[0.]*4 | 0.67M (35.6%) | 165.38M (41.4%) | 94.33%
DenseNet-40 | [0.]+[0.5]*12+[0.3]+[0.4]*12+[0.3]+[0.4]*9+[0.]*3 | 0.46M (55.8%) | 109.40M (61.3%) | 93.71%
python main_win.py \
--arch densenet_40 \
--resume [pre-trained model dir] \
--compress_rate [0.]+[0.5]*12+[0.3]+[0.4]*12+[0.3]+[0.4]*9+[0.]*3 \
--num_workers [worker numbers] \
--epochs 30 \
--lr 0.001 \
--lr_decay_step 5 \
--save_id 1 \
--weight_decay 0.005 \
--data_dir [dataset dir] \
--dataset CIFAR10 

4. GoogLeNet

Architecture | Compress Rate | Params | FLOPs | Accuracy
--- | --- | --- | --- | ---
GoogLeNet (Baseline) | - | 6.15M (0.0%) | 1.52B (0.0%) | 95.05%
GoogLeNet | [0.2]+[0.7]*15+[0.8]*9+[0.,0.4,0.] | 2.73M (55.6%) | 0.56B (63.2%) | 94.70%
GoogLeNet | [0.2]+[0.9]*24+[0.,0.4,0.] | 2.17M (64.7%) | 0.37B (75.7%) | 94.13%
python main_win.py \
--arch googlenet \
--resume [pre-trained model dir] \
--compress_rate [0.2]+[0.9]*24+[0.,0.4,0.] \
--num_workers [worker numbers] \
--epochs 1 \
--lr 0.001 \
--save_id 1 \
--weight_decay 0. \
--data_dir [dataset dir] \
--dataset CIFAR10

The first command prunes GoogLeNet and saves the pruned model to finally_pruned_model/googlenet_1.pt; the second command then fine-tunes this pruned model:

python main_win.py \
--arch googlenet \
--from_scratch True \
--resume finally_pruned_model/googlenet_1.pt \
--num_workers 2 \
--epochs 30 \
--lr 0.01 \
--lr_decay_step 5,15 \
--save_id 1 \
--weight_decay 0.005 \
--data_dir [dataset dir] \
--dataset CIFAR10

5. ResNet-50

Architecture | Compress Rate | Params | FLOPs | Top-1 Accuracy | Top-5 Accuracy
--- | --- | --- | --- | --- | ---
ResNet-50 (Baseline) | - | 25.55M (0.0%) | 4.11B (0.0%) | 76.15% | 92.87%
ResNet-50 | [0.]+[0.1,0.1,0.2]*1+[0.5,0.5,0.2]*2+[0.1,0.1,0.2]*1+[0.5,0.5,0.2]*3+[0.1,0.1,0.2]*1+[0.5,0.5,0.2]*5+[0.1,0.1,0.1]+[0.2,0.2,0.1]*2 | 16.08M (36.9%) | 2.13B (47.9%) | 75.08% | 92.30%
ResNet-50 | [0.]+[0.1,0.1,0.4]*1+[0.7,0.7,0.4]*2+[0.2,0.2,0.4]*1+[0.7,0.7,0.4]*3+[0.2,0.2,0.3]*1+[0.7,0.7,0.3]*5+[0.1,0.1,0.1]+[0.2,0.3,0.1]*2 | 13.73M (46.2%) | 1.50B (63.5%) | 73.43% | 91.57%
ResNet-50 | [0.]+[0.2,0.2,0.65]*1+[0.75,0.75,0.65]*2+[0.15,0.15,0.65]*1+[0.75,0.75,0.65]*3+[0.15,0.15,0.65]*1+[0.75,0.75,0.65]*5+[0.15,0.15,0.35]+[0.5,0.5,0.35]*2 | 8.10M (68.2%) | 0.98B (76.2%) | 70.26% | 89.82%
python main_win.py \
--arch resnet_50 \
--resume [pre-trained model dir] \
--data_dir [dataset dir] \
--dataset ImageNet \
--compress_rate [0.]+[0.1,0.1,0.4]*1+[0.7,0.7,0.4]*2+[0.2,0.2,0.4]*1+[0.7,0.7,0.4]*3+[0.2,0.2,0.3]*1+[0.7,0.7,0.3]*5+[0.1,0.1,0.1]+[0.2,0.3,0.1]*2 \
--num_workers [worker numbers] \
--batch_size 64 \
--epochs 2 \
--lr_decay_step 1 \
--lr 0.001 \
--save_id 1 \
--weight_decay 0. \
--input_size 224 \
--start_cov 0

As with GoogLeNet, the first command performs the pruning pass and saves the result to finally_pruned_model/resnet_50_1.pt; the second command fine-tunes the pruned ResNet-50 on ImageNet:

python main_win.py \
--arch resnet_50 \
--from_scratch True \
--resume finally_pruned_model/resnet_50_1.pt \
--num_workers 8 \
--epochs 40 \
--lr 0.001 \
--lr_decay_step 5,20 \
--save_id 2 \
--batch_size 64 \
--weight_decay 0.0005 \
--input_size 224 \
--data_dir [dataset dir] \
--dataset ImageNet 