Densely Connected Convolutional Networks (DenseNets)

This repository contains the code for DenseNet introduced in the following paper

Densely Connected Convolutional Networks (CVPR 2017, Best Paper Award)

Gao Huang*, Zhuang Liu*, Laurens van der Maaten and Kilian Weinberger (* Authors contributed equally).

and its journal version

Convolutional Networks with Dense Connectivity (TPAMI 2019)

Gao Huang, Zhuang Liu, Geoff Pleiss, Laurens van der Maaten and Kilian Weinberger.

Now with a memory-efficient implementation! Please check the technical report and code for more information.

The code is built on fb.resnet.torch.

Citation

If you find DenseNet useful in your research, please consider citing:

@article{huang2019convolutional,
  title={Convolutional Networks with Dense Connectivity},
  author={Huang, Gao and Liu, Zhuang and Pleiss, Geoff and van der Maaten, Laurens and Weinberger, Kilian},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
  year={2019}
}

@inproceedings{huang2017densely,
  title={Densely Connected Convolutional Networks},
  author={Huang, Gao and Liu, Zhuang and van der Maaten, Laurens and Weinberger, Kilian Q.},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
  year={2017}
}

Other Implementations

  1. Our [Caffe]
  2. Our memory-efficient [Caffe]
  3. Our memory-efficient [PyTorch]
  4. [PyTorch] by Andreas Veit
  5. [PyTorch] by Brandon Amos
  6. [PyTorch] by Federico Baldassarre
  7. [MXNet] by Nicatio
  8. [MXNet] by Xiong Lin
  9. [MXNet] by miraclewkf
  10. [Tensorflow] by Yixuan Li
  11. [Tensorflow] by Laurent Mazare
  12. [Tensorflow] by Illarion Khlestov
  13. [Lasagne] by Jan Schlüter
  14. [Keras] by tdeboissiere
  15. [Keras] by Roberto de Moura Estevão Filho
  16. [Keras] by Somshubra Majumdar
  17. [Chainer] by Toshinori Hanya
  18. [Chainer] by Yasunori Kudo
  19. [Torch 3D-DenseNet] by Barry Kui
  20. [Keras] by Christopher Masch
  21. [Tensorflow2] by Gaston Rios and Ulises Jeremias Cornejo Fandos

Note that we only listed some early implementations here. If you would like to add yours, please submit a pull request.

Some Follow-up Projects

  1. Multi-Scale Dense Convolutional Networks for Efficient Prediction
  2. DSOD: Learning Deeply Supervised Object Detectors from Scratch
  3. CondenseNet: An Efficient DenseNet using Learned Group Convolutions
  4. Fully Convolutional DenseNets for Semantic Segmentation
  5. Pelee: A Real-Time Object Detection System on Mobile Devices

Contents

  1. Introduction
  2. Usage
  3. Results on CIFAR
  4. Results on ImageNet and Pretrained Models
  5. Updates

Introduction

DenseNet is a network architecture in which each layer is directly connected to every other layer (within each dense block) in a feed-forward fashion. For each layer, the feature maps of all preceding layers are treated as separate inputs, while its own feature maps are passed on as inputs to all subsequent layers. This connectivity pattern yields state-of-the-art accuracy on CIFAR-10/100 (with or without data augmentation) and SVHN. On the large-scale ILSVRC 2012 (ImageNet) dataset, DenseNet achieves accuracy similar to ResNet's, but with fewer than half the parameters and roughly half the FLOPs.

Figure 1: A dense block with 5 layers and growth rate 4.

Figure 2: A deep DenseNet with three dense blocks.
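
The connectivity pattern is straightforward to express with Torch's nn containers. Below is a minimal sketch of one dense layer and one dense block; the function names are illustrative, not the repository's actual densenet.lua, and DenseNet-BC would additionally insert a 1x1 bottleneck convolution before the 3x3 one:

require 'nn'

-- One dense layer: BN -> ReLU -> 3x3 convolution producing growthRate new
-- feature maps, concatenated with the layer's input along the channel dimension.
local function denseLayer(nChannels, growthRate)
   local conv = nn.Sequential()
   conv:add(nn.SpatialBatchNormalization(nChannels))
   conv:add(nn.ReLU(true))
   -- 3x3 convolution, stride 1, padding 1: spatial size is unchanged
   conv:add(nn.SpatialConvolution(nChannels, growthRate, 3, 3, 1, 1, 1, 1))

   -- nn.Concat(2) feeds the same input to both branches and joins their
   -- outputs along dimension 2 (channels): output = [input, conv(input)]
   local concat = nn.Concat(2)
   concat:add(nn.Identity())
   concat:add(conv)
   return concat
end

-- A dense block with n such layers grows the channel count by growthRate per
-- layer, ending with nChannels + n * growthRate channels.
local function addDenseBlock(model, nChannels, growthRate, n)
   for i = 1, n do
      model:add(denseLayer(nChannels, growthRate))
      nChannels = nChannels + growthRate
   end
   return nChannels
end

This naive recursive concatenation is exactly what the memory-efficient implementation described below avoids storing repeatedly.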

Usage

  1. Install Torch and required dependencies like cuDNN. See the instructions here for a step-by-step guide.
  2. Clone this repo: git clone https://github.com/liuzhuang13/DenseNet.git

As an example, the following command trains a DenseNet-BC with depth L=100 and growth rate k=12 on CIFAR-10:

th main.lua -netType densenet -dataset cifar10 -batchSize 64 -nEpochs 300 -depth 100 -growthRate 12

As another example, the following command trains a DenseNet-BC with depth L=121 and growth rate k=32 on ImageNet:

th main.lua -netType densenet -dataset imagenet -data [dataFolder] -batchSize 256 -nEpochs 90 -depth 121 -growthRate 32 -nGPU 4 -nThreads 16 -optMemory 3

Please refer to fb.resnet.torch for data preparation.

DenseNet and DenseNet-BC

By default, the code runs the DenseNet-BC architecture, which has 1x1 convolutional bottleneck layers and compresses the number of channels by a factor of 0.5 at each transition layer. To run the original DenseNet instead, simply use the options -bottleneck false and -reduction 1.
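
For example, the following command (a sketch assembled from the options above, with the other flags matching the CIFAR-10 example earlier) trains the original DenseNet with L=40 and k=12 on CIFAR-10:

th main.lua -netType densenet -dataset cifar10 -batchSize 64 -nEpochs 300 -depth 40 -growthRate 12 -bottleneck false -reduction 1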

Memory efficient implementation (newly added feature on June 6, 2017)

The option -optMemory is very useful for reducing the GPU memory footprint when training a DenseNet. By default, it is set to 2, which activates the shareGradInput function (with small modifications from here). There are two more aggressive memory-efficient modes (-optMemory 3 and -optMemory 4) that use a customized densely connected layer. With -optMemory 4, the largest 190-layer DenseNet-BC on CIFAR can be trained on a single NVIDIA TITAN X GPU (using 8.3G of its 12G memory), instead of fully occupying four GPUs with the standard (recursive concatenation) implementation.
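
For example, that largest CIFAR model can be trained on a single GPU by combining the flags above (a sketch; the remaining options follow the CIFAR-10 example earlier):

th main.lua -netType densenet -dataset cifar10 -batchSize 64 -nEpochs 300 -depth 190 -growthRate 40 -optMemory 4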

More details about the memory efficient implementation are discussed here.

Results on CIFAR

The table below shows the test error rates (%) of DenseNets on the CIFAR datasets. The "+" mark denotes standard data augmentation (random crop after zero-padding, and horizontal flip). For a DenseNet model, L denotes its depth and k its growth rate. On CIFAR-10 and CIFAR-100 without data augmentation, a Dropout layer with drop rate 0.2 is introduced after each convolutional layer except the very first one.

| Model | Parameters | CIFAR-10 | CIFAR-10+ | CIFAR-100 | CIFAR-100+ |
|-------|------------|----------|-----------|-----------|------------|
| DenseNet (L=40, k=12) | 1.0M | 7.00 | 5.24 | 27.55 | 24.42 |
| DenseNet (L=100, k=12) | 7.0M | 5.77 | 4.10 | 23.79 | 20.20 |
| DenseNet (L=100, k=24) | 27.2M | 5.83 | 3.74 | 23.42 | 19.25 |
| DenseNet-BC (L=100, k=12) | 0.8M | 5.92 | 4.51 | 24.15 | 22.27 |
| DenseNet-BC (L=250, k=24) | 15.3M | 5.19 | 3.62 | 19.64 | 17.60 |
| DenseNet-BC (L=190, k=40) | 25.6M | - | 3.46 | - | 17.18 |

Results on ImageNet and Pretrained Models

Torch

Models in the original paper

The Torch models are trained under the same setting as in fb.resnet.torch. The error rates shown are 224x224 1-crop test errors.

| Network | Top-1 error (%) | Torch Model |
|---------|-----------------|-------------|
| DenseNet-121 (k=32) | 25.0 | Download (64.5MB) |
| DenseNet-169 (k=32) | 23.6 | Download (114.4MB) |
| DenseNet-201 (k=32) | 22.5 | Download (161.8MB) |
| DenseNet-161 (k=48) | 22.2 | Download (230.8MB) |

Models in the tech report

These models, trained with the memory-efficient implementation, are more accurate than those in the original paper; see the technical report for details.

| Network | Top-1 error (%) | Torch Model |
|---------|-----------------|-------------|
| DenseNet-264 (k=32) | 22.1 | Download (256MB) |
| DenseNet-232 (k=48) | 21.2 | Download (426MB) |
| DenseNet-cosine-264 (k=32) | 21.6 | Download (256MB) |
| DenseNet-cosine-264 (k=48) | 20.4 | Download (557MB) |

Caffe

https://github.com/shicai/DenseNet-Caffe.

PyTorch

See the PyTorch documentation on models. We would like to thank @gpleiss for this nice work in PyTorch.

Keras, Tensorflow and Theano

https://github.com/flyyufelix/DenseNet-Keras.

MXNet

https://github.com/miraclewkf/DenseNet.

Wide-DenseNet for better Time/Accuracy and Memory/Accuracy Tradeoff

If you use DenseNet as a model in your learning task and want to reduce memory and time consumption, we recommend using a wide and shallow DenseNet, following the strategy of wide residual networks. To obtain a wide DenseNet, set the depth to be smaller (e.g., L=40) and the growthRate to be larger (e.g., k=48).
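
For example, reusing the options from the Usage section, a Wide-DenseNet-BC with L=40 and k=48 can be trained on CIFAR-10 with a command along these lines (a sketch, not a prescribed configuration):

th main.lua -netType densenet -dataset cifar10 -batchSize 64 -nEpochs 300 -depth 40 -growthRate 48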

We tested a set of Wide-DenseNet-BCs and compared their memory and time consumption with the DenseNet-BC (L=100, k=12) shown above. The statistics were obtained on a single TITAN X card, with batch size 64, and without any memory optimization.

| Model | Parameters | CIFAR-10+ | CIFAR-100+ | Time per Iteration | Memory |
|-------|------------|-----------|------------|--------------------|--------|
| DenseNet-BC (L=100, k=12) | 0.8M | 4.51 | 22.27 | 0.156s | 5452MB |
| Wide-DenseNet-BC (L=40, k=36) | 1.5M | 4.58 | 22.30 | 0.130s | 4008MB |
| Wide-DenseNet-BC (L=40, k=48) | 2.7M | 3.99 | 20.29 | 0.165s | 5245MB |
| Wide-DenseNet-BC (L=40, k=60) | 4.3M | 4.01 | 19.99 | 0.223s | 6508MB |

Observations:

  1. Wide-DenseNet-BC (L=40, k=36) uses less memory/time while achieving about the same accuracy as DenseNet-BC (L=100, k=12).
  2. Wide-DenseNet-BC (L=40, k=48) uses about the same memory/time as DenseNet-BC (L=100, k=12), while being much more accurate.

Thus, for practical use, we suggest picking one of these Wide-DenseNet-BCs.

Updates

12/10/2019:

  1. Journal version (accepted by IEEE TPAMI) released.

08/23/2017:

  1. Add supporting code, so one can simply git clone and run.

06/06/2017:

  1. Support ultra memory-efficient training of DenseNet with a customized densely connected layer.

  2. Support memory-efficient training of DenseNet with the standard densely connected layer (recursive concatenation) by fixing the shareGradInput function.

05/17/2017:

  1. Add Wide-DenseNet.
  2. Add Keras, TensorFlow and Theano links for pretrained models.

04/20/2017:

  1. Add usage of models in PyTorch.

03/29/2017:

  1. Add the code for ImageNet training.

12/03/2016:

  1. Add ImageNet results and pretrained models.
  2. Add DenseNet-BC structures.

Contact

liuzhuangthu at gmail.com
gaohuang at tsinghua.edu.cn
Any discussions, suggestions and questions are welcome!
