CondenseNet: A Lightweight CNN for Mobile Devices

Overview

CondenseNets

This repository contains the code (in PyTorch) for "CondenseNet: An Efficient DenseNet using Learned Group Convolutions" paper by Gao Huang*, Shichen Liu*, Laurens van der Maaten and Kilian Weinberger (* Authors contributed equally).

Citation

If you find our project useful in your research, please consider citing:

@inproceedings{huang2018condensenet,
  title={Condensenet: An efficient densenet using learned group convolutions},
  author={Huang, Gao and Liu, Shichen and Van der Maaten, Laurens and Weinberger, Kilian Q},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
  pages={2752--2761},
  year={2018}
}

Contents

  1. Introduction
  2. Usage
  3. Results
  4. Contact

Introduction

CondenseNet is a novel, computationally efficient convolutional network architecture. It combines dense connectivity between layers with a mechanism to remove unused connections. The dense connectivity facilitates feature re-use in the network, whereas learned group convolutions remove connections between layers for which this feature re-use is superfluous. At test time, our model can be implemented using standard grouped convolutions, allowing for efficient computation in practice. Our experiments demonstrate that CondenseNets are much more efficient than other compact convolutional networks such as MobileNets and ShuffleNets.

Figure 1: Learned Group Convolution with G=C=3.

Figure 2: CondenseNets with Fully Dense Connectivity and Increasing Growth Rate.
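
At test time, each learned group convolution reduces to a fixed channel re-ordering followed by a standard grouped 1x1 convolution, which is what makes the converted models fast on existing hardware. The snippet below is a minimal PyTorch sketch of that idea; the module and attribute names (LearnedGroupConv1x1, index) are illustrative and not taken from this repository's code.

import torch
import torch.nn as nn

class LearnedGroupConv1x1(nn.Module):
    # Test-time form of a learned group convolution: the connections kept
    # after condensation are realized as a channel permutation (index_select)
    # followed by a standard grouped 1x1 convolution.
    def __init__(self, in_channels, out_channels, groups):
        super().__init__()
        # Buffer selecting, for each group, the input channels it kept
        # (identity permutation here as a placeholder).
        self.register_buffer("index", torch.arange(in_channels))
        self.norm = nn.BatchNorm2d(in_channels)
        self.relu = nn.ReLU(inplace=True)
        # Standard grouped convolution, efficient at inference time.
        self.conv = nn.Conv2d(in_channels, out_channels, kernel_size=1,
                              groups=groups, bias=False)

    def forward(self, x):
        x = torch.index_select(x, 1, self.index)  # re-order the kept channels
        return self.conv(self.relu(self.norm(x)))

# e.g. layer = LearnedGroupConv1x1(in_channels=48, out_channels=16, groups=4)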

Usage

Dependencies

The code is implemented in PyTorch; a CUDA-enabled PyTorch installation is needed for the GPU training commands below.

Train

As an example, use the following command to train a CondenseNet on ImageNet:

python main.py --model condensenet -b 256 -j 20 /PATH/TO/IMAGENET \
--stages 4-6-8-10-8 --growth 8-16-32-64-128 --gpu 0,1,2,3,4,5,6,7 --resume

As another example, use the following command to train a CondenseNet on CIFAR-10:

python main.py --model condensenet -b 64 -j 12 cifar10 \
--stages 14-14-14 --growth 8-16-32 --gpu 0 --resume
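
In these commands, --stages specifies the number of layers in each dense block and --growth the corresponding growth rates, both as dash-separated lists with one entry per block. The following is a minimal sketch of how such arguments can be parsed, assuming the dash-separated convention shown above; the parsing code itself is illustrative rather than copied from main.py.

import argparse

# Illustrative parsing of the dash-separated --stages / --growth arguments
# used in the commands above; main.py may differ in details.
parser = argparse.ArgumentParser()
parser.add_argument("--stages", type=str,
                    help="layers per dense block, e.g. 4-6-8-10-8")
parser.add_argument("--growth", type=str,
                    help="growth rate per block, e.g. 8-16-32-64-128")
args = parser.parse_args(["--stages", "4-6-8-10-8",
                          "--growth", "8-16-32-64-128"])

stages = list(map(int, args.stages.split("-")))   # [4, 6, 8, 10, 8]
growth = list(map(int, args.growth.split("-")))   # [8, 16, 32, 64, 128]
assert len(stages) == len(growth), "one growth rate per stage"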

Evaluation

We take the ImageNet model trained above as an example.

To evaluate a trained model from the default checkpoint directory, use the --evaluate option:

python main.py --model condensenet -b 64 -j 20 /PATH/TO/IMAGENET \
--stages 4-6-8-10-8 --growth 8-16-32-64-128 --gpu 0 --resume \
--evaluate

or use --evaluate-from to evaluate a checkpoint at an arbitrary path:

python main.py --model condensenet -b 64 -j 20 /PATH/TO/IMAGENET \
--stages 4-6-8-10-8 --growth 8-16-32-64-128 --gpu 0 --resume \
--evaluate-from /PATH/TO/BEST/MODEL

Note that these models are still the unconverted (large) models. To convert a model to the group-convolution version described in the paper, use the --convert-from option:

python main.py --model condensenet -b 64 -j 20 /PATH/TO/IMAGENET \
--stages 4-6-8-10-8 --growth 8-16-32-64-128 --gpu 0 --resume \
--convert-from /PATH/TO/BEST/MODEL

Finally, to load a converted model directly (that is, a CondenseNet with standard group convolutions), use the condensenet_converted model together with the --evaluate-from option:

python main.py --model condensenet_converted -b 64 -j 20 /PATH/TO/IMAGENET \
--stages 4-6-8-10-8 --growth 8-16-32-64-128 --gpu 0 --resume \
--evaluate-from /PATH/TO/CONVERTED/MODEL
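
For inspection or custom evaluation scripts, a converted model file can also be loaded manually. The sketch below assumes the file is a regular PyTorch checkpoint with an optional "state_dict" key; the builder function shown is hypothetical, and the supported path remains the --evaluate-from option above.

import torch

# Illustrative manual loading of a converted checkpoint; the key layout
# ("state_dict") is an assumption about the saved file, not guaranteed.
# "/PATH/TO/CONVERTED/MODEL" is the placeholder path from the command above.
checkpoint = torch.load("/PATH/TO/CONVERTED/MODEL", map_location="cpu")
state_dict = checkpoint.get("state_dict", checkpoint)

# model = build_condensenet_converted(stages, growth)  # hypothetical builder,
# model.load_state_dict(state_dict)                    # must match --stages/--growth
# model.eval()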

Other Options

We also include a DenseNet implementation in this repository.
For more usage examples, please refer to script.sh.
For detailed options, please run python main.py --help.

Results

Results on ImageNet

Model                    FLOPs   Params   Top-1 Err. (%)   Top-5 Err. (%)   PyTorch Model
CondenseNet-74 (C=G=4)   529M    4.8M     26.2             8.3              Download (18.69M)
CondenseNet-74 (C=G=8)   274M    2.9M     29.0             10.0             Download (11.68M)

Results on CIFAR

Model              FLOPs    Params   CIFAR-10 Err. (%)   CIFAR-100 Err. (%)
CondenseNet-50     28.6M    0.22M    6.22                -
CondenseNet-74     51.9M    0.41M    5.28                -
CondenseNet-86     65.8M    0.52M    5.06                23.64
CondenseNet-98     81.3M    0.65M    4.83                -
CondenseNet-110    98.2M    0.79M    4.63                -
CondenseNet-122    116.7M   0.95M    4.48                -
CondenseNet-182*   513M     4.2M     3.76                18.47

(* trained 600 epochs)

Inference time on ARM platform

Model                    FLOPs     Top-1 Err. (%)   Time (s)
VGG-16                   15,300M   28.5             354
ResNet-18                1,818M    30.2             8.14
1.0 MobileNet-224        569M      29.4             1.96
CondenseNet-74 (C=G=4)   529M      26.2             1.89
CondenseNet-74 (C=G=8)   274M      29.0             0.99

Contact

[email protected]
[email protected]

We are working on implementations for other frameworks.
Any discussions or concerns are welcome!
