Reproduce ResNet-v2 (Identity Mappings in Deep Residual Networks) with MXNet

Overview

Reproduce ResNet-v2 using MXNet

Requirements

  • Install MXNet on a machine with a CUDA GPU; it's better if cuDNN v5 is also installed.
  • Please fix the randomness if you want to train your own model, using this pull request (a minimal sketch follows).
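
A minimal sketch of fixing the randomness (the seed value is arbitrary; the pull request above is authoritative):

import random
import numpy as np
import mxnet as mx

# Seed every common source of randomness so repeated runs are comparable.
seed = 2016
random.seed(seed)     # Python-level shuffling (e.g. of list files)
np.random.seed(seed)  # NumPy-backed augmentation and initialization
mx.random.seed(seed)  # MXNet initializers and data augmentation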

Trained models

The trained ResNet models achieve better error rates than the original ResNet-v1 models.

ImageNet 1K

Imagenet 1000 class dataset with 1.2 million images.

single center crop (224x224) validation error rate (%)

Network       Top-1 error   Top-5 error   Trained Model
ResNet-18     30.48         10.92         data.dmlc.ml
ResNet-34     27.20         8.86          data.dmlc.ml
ResNet-50     24.39         7.24          data.dmlc.ml
ResNet-101    22.68         6.58          data.dmlc.ml
ResNet-152    22.25         6.42          data.dmlc.ml
ResNet-200    22.14         6.16          data.dmlc.ml

ImageNet 11K

Full imagenet dataset: fall11_whole.tar from http://www.image-net.org/download-images.

We removed classes with fewer than 500 images. The filtered dataset contains 11221 classes and 12.4 million images. We randomly picked 50 images from each class as the validation set. The split is available at http://data.dmlc.ml/mxnet/models/imagenet-11k/

Network       Top-1 error   Top-5 error   Trained Model
ResNet-200    58.4          28.8

cifar10: single crop validation error rate (%)

Network       Top-1 error
ResNet-164    4.68

Training Curve

The following curve shows ResNet-v2 trained on imagenet-1k. All the training details (GPU information, lr schedule, batch size, etc.) can be found here, and the training speed can be read from the corresponding logs.

You can get the curve by running:
cd log && python plot_curve.py --logs=resnet-18.log,resnet-34.log,resnet-50.log,resnet-101.log,resnet-152.log,resnet-200.log
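
plot_curve.py ships with the repo; for a rough idea of what such a script does, here is a sketch assuming the standard MXNet log format (lines like Epoch[N] Validation-accuracy=V):

import re
import sys
import matplotlib.pyplot as plt

# Sketch: pull per-epoch validation accuracy out of an MXNet training log.
pattern = re.compile(r"Epoch\[(\d+)\].*Validation-accuracy=([\d.]+)")
epochs, errors = [], []
with open(sys.argv[1]) as f:
    for line in f:
        m = pattern.search(line)
        if m:
            epochs.append(int(m.group(1)))
            errors.append(1.0 - float(m.group(2)))

plt.plot(epochs, errors, label="val top-1 error")
plt.xlabel("epoch")
plt.ylabel("error")
plt.legend()
plt.savefig("curve.png")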

How to Train

imagenet

First you should prepare train.lst and val.lst. You can generate these list files yourself (please see make-the-image-list, and do not forget to shuffle the list files!), or just download the provided version from here.
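
Shuffling the list file is straightforward; a minimal sketch:

import random

# Shuffle an im2rec-style .lst file in place so training batches
# are not ordered by class.
with open("train.lst") as f:
    lines = f.readlines()
random.shuffle(lines)
with open("train.lst", "w") as f:
    f.writelines(lines)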

Then you can create the *.rec file; these cmd parameters are recommended:

$im2rec_path train.lst train/ data/imagenet/train_480_q90.rec resize=480 quality=90

Setting resize=480 and quality=90 (quality=100 would be best, I think) uses more disk space (about ~103G), but it is very useful for scale augmentation during training [1][2] and helps reproduce a good result.

Because you are training imagenet, set data-type = imagenet; the training cmd then looks like this (here using 6 GPUs):

python -u train_resnet.py --data-dir data/imagenet \
--data-type imagenet --depth 50 --batch-size 256  --gpus=0,1,2,3,4,5

Change depth to train a different model; currently ResNet-18, ResNet-34, ResNet-50, ResNet-101, ResNet-152, and ResNet-200 are supported (see the per-stage unit counts sketched below).
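
For reference, these depths follow the usual per-stage residual-unit counts from [1][2] (a sketch; train_resnet.py may organize this differently):

# Per-stage residual-unit counts for the supported imagenet depths [1][2].
# ResNet-18/34 use the basic two-conv block; 50 and deeper use bottlenecks.
units = {
    18:  [2, 2, 2, 2],
    34:  [3, 4, 6, 3],
    50:  [3, 4, 6, 3],
    101: [3, 4, 23, 3],
    152: [3, 8, 36, 3],
    200: [3, 24, 36, 3],
}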

cifar10

Same as above: first use im2rec to create the .rec file, then train with a cmd like this:

python -u train_resnet.py --data-dir data/cifar10 --data-type cifar10 \
  --depth 164 --batch-size 128 --num-examples 50000 --gpus=0,1

Change depth to train a different model; only depths satisfying (depth-2)%9==0 are supported, such as ResNet-110, ResNet-164, ResNet-1001, ... (see the sketch below).
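
The constraint comes from the cifar10 architecture in [2]: three stages of n bottleneck units with three conv layers each, plus the first conv and the final classifier, give depth = 9n + 2. A quick sketch:

# cifar10 depth rule from [2]: 3 stages x n bottleneck units x 3 convs,
# plus the first conv layer and the final classifier = 9n + 2 layers.
def units_per_stage(depth):
    assert (depth - 2) % 9 == 0, "depth must satisfy (depth-2) % 9 == 0"
    return (depth - 2) // 9

print(units_per_stage(110))   # 12
print(units_per_stage(164))   # 18
print(units_per_stage(1001))  # 111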

retrain

When training on a large dataset (like imagenet), it's often necessary to change the learning rate manually, or training may be killed for some other reason, so retraining is very important. The code here supports retraining: suppose you want to retrain your ResNet-50 model from epoch 70 with lr=0.0005, wd=0.001, and batch-size=256 on 8 GPUs; then you can try this cmd:

python -u train_resnet.py --data-dir data/imagenet --data-type imagenet --depth 50 --batch-size 256 \
--gpus=0,1,2,3,4,5,6,7 --model-load-epoch=70 --lr 0.0005 --wd 0.001 --retrain
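
Under the hood, --retrain just resumes from the checkpoint saved at --model-load-epoch; a minimal sketch of the idea (the model prefix and data iterators are assumptions):

import mxnet as mx

# Load the symbol and weights saved at epoch 70, then keep fitting
# with the new hyper-parameters, starting the epoch counter at 70.
sym, arg_params, aux_params = mx.model.load_checkpoint("model/resnet-50", 70)
mod = mx.mod.Module(symbol=sym, context=[mx.gpu(i) for i in range(8)])
# mod.fit(train_data, eval_data,
#         arg_params=arg_params, aux_params=aux_params,
#         optimizer_params={'learning_rate': 0.0005, 'wd': 0.001},
#         begin_epoch=70, num_epoch=120)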

Notes

  • it's better to train imagenet models for more than 110 epochs, as this leads to better results.
  • when the epoch reaches about 95, cancel the scale/color/aspect augmentation during training; this can be done by commenting out just six lines of the code (and switching to the 256-pixel .rec file), like this:
train = mx.io.ImageRecordIter(
        # path_imgrec         = os.path.join(args.data_dir, "train_480_q90.rec"),
        path_imgrec         = os.path.join(args.data_dir, "train_256_q90.rec"),
        label_width         = 1,
        data_name           = 'data',
        label_name          = 'softmax_label',
        data_shape          = (3, 32, 32) if args.data_type=="cifar10" else (3, 224, 224),
        batch_size          = args.batch_size,
        pad                 = 4 if args.data_type == "cifar10" else 0,
        fill_value          = 127,  # only used when pad is valid
        rand_crop           = True,
        # max_random_scale    = 1.0 if args.data_type == "cifar10" else 1.0,  # 480
        # min_random_scale    = 1.0 if args.data_type == "cifar10" else 0.533,  # 256.0/480.0
        # max_aspect_ratio    = 0 if args.data_type == "cifar10" else 0.25,
        # random_h            = 0 if args.data_type == "cifar10" else 36,  # 0.4*90
        # random_s            = 0 if args.data_type == "cifar10" else 50,  # 0.4*127
        # random_l            = 0 if args.data_type == "cifar10" else 50,  # 0.4*127
        rand_mirror         = True,
        shuffle             = True,
        num_parts           = kv.num_workers,
        part_index          = kv.rank)

Note that you should first prepare a train_256_q90.rec using im2rec, like:

$im2rec_path train.lst train/ data/imagenet/train_256_q90.rec resize=256 quality=90

Alternatively, cancelling the scale/color/aspect augmentation can be done simply by passing --aug-level=1 on the cmd line.

  • it's better to run more than 30 epochs before the first learning-rate decrease (e.g. at epoch 60); you can decide the epoch by observing the val-acc curve, then set the new lr with retrain.

Training ResNet-200 with only one GPU using the 'dark knowledge' of MXNet

You can train ResNet-200 or even ResNet-1000 on imagenet with only one GPU! For example, we can train ResNet-200 with batch-size=128 on one GPU (12G); if your GPU memory is less than 12G, decrease the batch size a little. Here is how to use the 'dark knowledge' (memory optimization) of MXNet:
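
The snippet below sketches the idea, assuming the dmlc mxnet-memonger helper; resnet() is a hypothetical stand-in for this repo's symbol-construction function:

import mxnet as mx
import memonger  # https://github.com/dmlc/mxnet-memonger

# Build the network symbol as usual (resnet() stands in for the symbol
# builder in this repo), then let memonger search for a memory plan
# that trades recomputation for activation memory.
net = resnet(units=[3, 24, 36, 3], num_classes=1000)  # hypothetical call
net = memonger.search_plan(net, data=(128, 3, 224, 224))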

When memonger is turned on, training is about 25% slower, but much deeper networks can be trained. Have fun!

ResNet-v2 vs ResNet-v1

Does ResNet-v2 always achieve better results than ResNet-v1 on imagenet? The answer is no: ResNet-v2 has no advantage over ResNet-v1, or is even at a disadvantage, when depth < 152. We can get the following results from paper [2]. (Why?)

ImageNet: single center crop validation error rate(%)

Network         Crop size   Top-1   Top-5
ResNet-101-v1   224x224     23.6    7.1
ResNet-101-v2   224x224     24.6    7.5
ResNet-152-v1   320x320     21.3    5.5
ResNet-152-v2   320x320     21.1    5.5

We can see that:

  • when depth=101, ResNet-v2 is 1% worse than ResNet-v1 on top-1 and 0.4% worse on top-5.
  • when depth=152, ResNet-v2 is only 0.2% better than ResNet-v1 on top-1 and matches its top-5 performance, even with crop-size=320x320.

How to use Trained Models

We can use a pre-trained model to classify an input image; the steps are simple:

  • download the pre-trained model from data.dmlc.ml and put it into the predict directory.
  • cd predict and run python -u predict.py --img test.jpg --prefix resnet-50 --gpu 0; this recognizes test.jpg using the model resnet-50-0000.params on GPU 0 and outputs the classification result (a rough sketch of what such a script does follows).
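
For the curious, a sketch of what such a predict script does (file names, crop size, and preprocessing are assumptions; the repo's predict.py is authoritative):

import numpy as np
import cv2
import mxnet as mx

# Load the downloaded checkpoint: resnet-50-symbol.json + resnet-50-0000.params.
sym, arg_params, aux_params = mx.model.load_checkpoint("resnet-50", 0)
mod = mx.mod.Module(symbol=sym, context=mx.gpu(0), label_names=None)
mod.bind(for_training=False, data_shapes=[("data", (1, 3, 224, 224))])
mod.set_params(arg_params, aux_params)

# Simple preprocessing: BGR->RGB, resize, HWC->CHW, add a batch dimension.
img = cv2.cvtColor(cv2.imread("test.jpg"), cv2.COLOR_BGR2RGB)
img = cv2.resize(img, (224, 224)).astype(np.float32)
img = np.expand_dims(np.transpose(img, (2, 0, 1)), axis=0)

mod.forward(mx.io.DataBatch([mx.nd.array(img)]), is_train=False)
prob = mod.get_outputs()[0].asnumpy().squeeze()
print("top-1 class index:", int(np.argmax(prob)))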

Reference

[1] Kaiming He, et al. "Deep Residual Learning for Image Recognition." arXiv:1512.03385 (2015).
[2] Kaiming He, et al. "Identity Mappings in Deep Residual Networks." arXiv:1603.05027 (2016).
[3] Caffe official training code and models: https://github.com/KaimingHe/deep-residual-networks
[4] Torch training code and models provided by Facebook: https://github.com/facebook/fb.resnet.torch
[5] MXNet ResNet-v1 cifar10 example: https://github.com/dmlc/mxnet/blob/master/example/image-classification/train_cifar10_resnet.py
