Overview

convnet-benchmarks

Easy benchmarking of all public open-source implementations of convnets. A summary is provided in the section below.

Machine: 6-core Intel Core i7-5930K CPU @ 3.50GHz + NVIDIA Titan X + Ubuntu 14.04 x86_64

ImageNet Winners Benchmarking

I pick some popular ImageNet models and clock the time for a full forward + backward pass, averaging over 10 runs. Dropout and softmax layers are ignored.
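The timing procedure above can be sketched as a small harness. This is a hypothetical illustration, not the repository's actual benchmark script; `step` stands for one forward + backward pass of whichever framework is being measured (any GPU synchronization is assumed to happen inside it):

```python
import time

def benchmark_ms(step, n_warmup=1, n_runs=10):
    """Average wall-clock time of `step` in milliseconds over n_runs."""
    for _ in range(n_warmup):
        step()  # warm-up pass (kernel selection, allocator warm-up)
    t0 = time.perf_counter()
    for _ in range(n_runs):
        step()
    return (time.perf_counter() - t0) / n_runs * 1000.0
```

The warm-up pass matters: the first call often pays one-time costs (algorithm auto-tuning, memory allocation) that would skew a 10-run average.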

Notation

Input is described as {batch_size}x{num_channels}x{image_height}x{image_width}, where batch_size is the number of images in a minibatch, num_channels is the number of channels per image, and image_height and image_width are the image dimensions in pixels.
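For example, the AlexNet input 128x3x224x224 from the tables below corresponds to the following array shape (a minimal NumPy sketch; it does not imply any of the benchmarked frameworks):

```python
import numpy as np

# 128 images, 3 channels, 224x224 pixels, as in the AlexNet table below
batch_size, num_channels, height, width = 128, 3, 224, 224
x = np.zeros((batch_size, num_channels, height, width), dtype=np.float32)

print(x.shape)            # (128, 3, 224, 224)
print(x.nbytes // 2**20)  # 73 (MiB for one fp32 input minibatch)
```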

One small note:

The CuDNN benchmarks are done using Torch bindings; one could equally run them via Caffe bindings or those of any other library. This note is here to clarify that Caffe (native) and Torch (native) refer to the convolution kernels each framework ships as its default fallback. Some frameworks, such as TensorFlow and Chainer, are benchmarked with CuDNN but do not say so explicitly, so one might conclude that these frameworks as a whole are faster than, for example, Caffe, which is not necessarily the case.

AlexNet (One Weird Trick paper) - Input 128x3x224x224

Library Class Time (ms) forward (ms) backward (ms)
CuDNN[R4]-fp16 (Torch) cudnn.SpatialConvolution 71 25 46
Nervana-neon-fp16 ConvLayer 78 25 52
CuDNN[R4]-fp32 (Torch) cudnn.SpatialConvolution 81 27 53
TensorFlow conv2d 81 26 55
Nervana-neon-fp32 ConvLayer 87 28 58
fbfft (Torch) fbnn.SpatialConvolution 104 31 72
Chainer Convolution2D 177 40 136
cudaconvnet2* ConvLayer 177 42 135
CuDNN[R2] * cudnn.SpatialConvolution 231 70 161
Caffe (native) ConvolutionLayer 324 121 203
Torch-7 (native) SpatialConvolutionMM 342 132 210
CL-nn (Torch) SpatialConvolutionMM 963 388 574
Caffe-CLGreenTea ConvolutionLayer 1442 210 1232

Overfeat [fast] - Input 128x3x231x231

Library Class Time (ms) forward (ms) backward (ms)
Nervana-neon-fp16 ConvLayer 176 58 118
Nervana-neon-fp32 ConvLayer 211 69 141
CuDNN[R4]-fp16 (Torch) cudnn.SpatialConvolution 242 86 156
CuDNN[R4]-fp32 (Torch) cudnn.SpatialConvolution 268 94 174
TensorFlow conv2d 279 90 189
fbfft (Torch) SpatialConvolutionCuFFT 342 114 227
Chainer Convolution2D 620 135 484
cudaconvnet2* ConvLayer 723 176 547
CuDNN[R2] * cudnn.SpatialConvolution 810 234 576
Caffe ConvolutionLayer 823 355 468
Torch-7 (native) SpatialConvolutionMM 878 379 499
CL-nn (Torch) SpatialConvolutionMM 963 388 574
Caffe-CLGreenTea ConvolutionLayer 2857 616 2240

OxfordNet [Model-A] - Input 64x3x224x224

Library Class Time (ms) forward (ms) backward (ms)
Nervana-neon-fp16 ConvLayer 254 82 171
Nervana-neon-fp32 ConvLayer 320 103 217
CuDNN[R4]-fp16 (Torch) cudnn.SpatialConvolution 471 140 331
CuDNN[R4]-fp32 (Torch) cudnn.SpatialConvolution 529 162 366
TensorFlow conv2d 540 158 382
Chainer Convolution2D 885 251 632
fbfft (Torch) SpatialConvolutionCuFFT 1092 355 737
cudaconvnet2* ConvLayer 1229 408 821
CuDNN[R2] * cudnn.SpatialConvolution 1099 342 757
Caffe ConvolutionLayer 1068 323 745
Torch-7 (native) SpatialConvolutionMM 1105 350 755
CL-nn (Torch) SpatialConvolutionMM 3437 875 2562
Caffe-CLGreenTea ConvolutionLayer 5620 988 4632

GoogleNet V1 - Input 128x3x224x224

Library Class Time (ms) forward (ms) backward (ms)
Nervana-neon-fp16 ConvLayer 230 72 157
Nervana-neon-fp32 ConvLayer 270 84 186
TensorFlow conv2d 445 135 310
CuDNN[R4]-fp16 (Torch) cudnn.SpatialConvolution 462 112 349
CuDNN[R4]-fp32 (Torch) cudnn.SpatialConvolution 470 130 340
Chainer Convolution2D 687 189 497
Caffe ConvolutionLayer 1935 786 1148
CL-nn (Torch) SpatialConvolutionMM 7016 3027 3988
Caffe-CLGreenTea ConvolutionLayer 9462 746 8716

Layer-wise Benchmarking (Last Updated April 2015)

Spatial Convolution layer (3D input 3D output, densely connected)

forward + backprop (wrt input and weights)
Original Library Class/Function Benchmarked Time (ms) forward (ms) backward (ms)
fbfft SpatialConvolutionCuFFT 256 101 155
cuda-convnet2 * ConvLayer 977 201 776
cuda-convnet** pylearn2.cuda_convnet 1077 312 765
CuDNN R2 * cudnn.SpatialConvolution 1019 269 750
Theano CorrMM 1225 407 818
Caffe ConvolutionLayer 1231 396 835
Torch-7 SpatialConvolutionMM 1265 418 877
DeepCL ConvolutionLayer 6280 2648 3632
cherry-picking**** best per layer 235 79 155

This table has NOT been updated for the Titan X. The numbers below were measured on a Titan Black and are kept only for informational and legacy purposes.

Original Library Class/Function Benchmarked Time (ms) forward (ms) backward (ms)
Theano (experimental)*** conv2d_fft 1178 304 874
Torch-7 nn.SpatialConvolutionBHWD 1892 581 1311
ccv ccv_convnet_layer 809+bw 809 (backward not benchmarked)
Theano (legacy) conv2d 70774 3833 66941
  • * indicates that the library was tested with Torch bindings of the specific kernels.
  • ** indicates that the library was tested with Pylearn2 bindings.
  • *** This is an experimental module which used FFT to calculate convolutions. It uses a lot of memory according to @benanne
  • **** The last row shows results obtainable when choosing the best-performing library for each layer.
  • L1 - Input: 128x128, Batch size: 128, Feature maps: 3->96, Kernel size: 11x11, Stride: 1x1
  • L2 - Input: 64x64, Batch size: 128, Feature maps: 64->128, Kernel size: 9x9, Stride: 1x1
  • L3 - Input: 32x32, Batch size: 128, Feature maps: 128->128, Kernel size: 9x9, Stride: 1x1
  • L4 - Input: 16x16, Batch size: 128, Feature maps: 128->128, Kernel size: 7x7, Stride: 1x1
  • L5 - Input: 13x13, Batch size: 128, Feature maps: 384->384, Kernel size: 3x3, Stride: 1x1
  • The table is ranked by the total time of forward+backward calls summed over layers (L1 + L2 + L3 + L4 + L5)
Breakdown
forward

Columns L1, L2, L3, L4, L5, Total are times in milliseconds

Original Library Class/Function Benchmarked L1 L2 L3 L4 L5 Total
fbfft SpatialConvolutionCuFFT 57 27 6 2 9 101
cuda-convnet2 * ConvLayer 36 113 40 4 8 201
cuda-convnet** pylearn2.cuda_convnet 38 183 68 7 16 312
CuDNN R2 cudnn.SpatialConvolution 56 143 53 6 11 269
Theano CorrMM 91 143 121 24 28 407
Caffe ConvolutionLayer 93 136 116 24 27 396
Torch-7 nn.SpatialConvolutionMM 94 149 123 24 28 418
DeepCL ConvolutionLayer 738 1241 518 47 104 2648
cherry-picking**** best per layer 36 27 6 2 8 79
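The cherry-picking row is simply the per-layer minimum across all libraries. A quick sketch reproducing it from the forward numbers above (library names abbreviated for brevity):

```python
# Per-layer forward times (ms) copied from the table above.
forward = {
    "fbfft":         [57, 27, 6, 2, 9],
    "cuda-convnet2": [36, 113, 40, 4, 8],
    "cuda-convnet":  [38, 183, 68, 7, 16],
    "CuDNN R2":      [56, 143, 53, 6, 11],
    "Theano CorrMM": [91, 143, 121, 24, 28],
    "Caffe":         [93, 136, 116, 24, 27],
    "Torch-7":       [94, 149, 123, 24, 28],
    "DeepCL":        [738, 1241, 518, 47, 104],
}
# Best library per layer: take the minimum of each column.
best_per_layer = [min(times[i] for times in forward.values()) for i in range(5)]
print(best_per_layer, sum(best_per_layer))  # [36, 27, 6, 2, 8] 79
```

The total of 79 ms matches the cherry-picking row: fbfft wins the small-feature-map layers while cuda-convnet2 wins L1.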
backward (gradInput + gradWeight)

Columns L1, L2, L3, L4, L5, Total are times in milliseconds

Original Library Class/Function Benchmarked L1 L2 L3 L4 L5 Total
fbfft SpatialConvolutionCuFFT 76 45 12 4 18 155
cuda-convnet2 * ConvLayer 103 467 162 15 29 776
cuda-convnet** pylearn2.cuda_convnet 136 433 147 15 34 765
CuDNN R2 cudnn.SpatialConvolution 139 401 159 19 32 750
Theano CorrMM 179 405 174 29 31 818
Caffe ConvolutionLayer 200 405 172 28 30 835
Torch-7 nn.SpatialConvolutionMM 206 432 178 29 32 877
DeepCL ConvolutionLayer 484 2144 747 59 198 3632
cherry-picking**** best per layer 76 45 12 4 18 155
Owner
Soumith Chintala