
RLA-Net: Recurrent Layer Aggregation

Recurrence along Depth: Deep Networks with Recurrent Layer Aggregation

This is an implementation of RLA-Net (accepted by NeurIPS 2021; paper: arXiv:2110.11852).

[Figure: RLA-Net architecture]

Introduction

This paper introduces the concept of layer aggregation to describe how information from previous layers can be reused to better extract features at the current layer. While DenseNet is a typical example of the layer aggregation mechanism, its redundancy has been commonly criticized in the literature. This motivates us to propose a very lightweight module, called recurrent layer aggregation (RLA), that makes use of the sequential structure of layers in a deep CNN. Our RLA module is compatible with many mainstream deep CNNs, including ResNets, Xception and MobileNetV2, and its effectiveness is verified by extensive experiments on image classification, object detection and instance segmentation tasks. Specifically, improvements are observed uniformly on the CIFAR, ImageNet and MS COCO datasets, and the corresponding RLA-Nets can surprisingly boost performance by 2-3% on the object detection task. This demonstrates the power of the RLA module in helping main CNNs better learn structural information in images.

RLA module

[Figure: the RLA module]
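To make the mechanism concrete, here is a minimal PyTorch sketch of recurrent layer aggregation across one stage of a CNN. The class name, the hidden-state size and the conv-tanh update rule are illustrative assumptions, not the exact module from this repo; see the uploaded model files for the real implementation.

```python
import torch
import torch.nn as nn

class RLAStage(nn.Module):
    """Sketch: recurrently aggregate layer outputs within one CNN stage.

    A small hidden state h (hidden_channels feature maps) is updated after
    every block with shared weights and concatenated to the next block's
    input, so each block can reuse information from all previous layers at
    a cost that does not grow with depth.
    """

    def __init__(self, blocks, channels, hidden_channels=32):
        super().__init__()
        # each block must map (channels + hidden_channels) -> channels
        self.blocks = nn.ModuleList(blocks)
        self.hidden_channels = hidden_channels
        # shared recurrent update, analogous to an RNN cell applied over depth
        self.update = nn.Sequential(
            nn.Conv2d(channels + hidden_channels, hidden_channels,
                      kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(hidden_channels),
            nn.Tanh(),
        )

    def forward(self, x):
        b, _, height, width = x.shape
        h = x.new_zeros(b, self.hidden_channels, height, width)
        for block in self.blocks:
            y = block(torch.cat([x, h], dim=1))        # block sees input + history
            h = self.update(torch.cat([y, h], dim=1))  # aggregate into hidden state
            x = y
        return x, h
```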

Changelog

  • 2021/04/06 Upload RLA-ResNet model.
  • 2021/04/16 Upload RLA-MobileNetV2 (depthwise separable conv version) model.
  • 2021/09/29 Upload all ablation studies on ImageNet.
  • 2021/09/30 Upload mmdetection files.
  • 2021/10/01 Upload pretrained weights.

Installation

Requirements

Our environments

  • OS: Linux Red Hat 4.8.5
  • CUDA: 10.2
  • Toolkit: Python 3.8.5, PyTorch 1.7.0, torchvision 0.8.1
  • GPU: Tesla V100

Please refer to get_started.md for more details about installation.

Quick Start

Train with ResNet

- Use single node or multi node with multiple GPUs

Use multi-processing distributed training to launch N processes per node, where N is the number of GPUs on that node. This is the fastest way to use PyTorch for either single-node or multi-node data parallel training.

python train.py -a {model_name} --b {batch_size} --multiprocessing-distributed --world-size 1 --rank 0 {imagenet-folder with train and val folders}

- Specify single GPU or multiple GPUs

CUDA_VISIBLE_DEVICES={device_ids} python train.py -a {model_name} --b {batch_size} --multiprocessing-distributed --world-size 1 --rank 0 {imagenet-folder with train and val folders}
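For example, to train on GPUs 0-3 of a single machine (the model name rla_resnet50 and batch size 256 are illustrative; check the architectures accepted by -a in train.py):

CUDA_VISIBLE_DEVICES=0,1,2,3 python train.py -a rla_resnet50 --b 256 --multiprocessing-distributed --world-size 1 --rank 0 /path/to/imagenet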

Testing

To evaluate the best model

python train.py -a {model_name} --b {batch_size} --multiprocessing-distributed --world-size 1 --rank 0 --resume {path to the best model} -e {imagenet-folder with train and val folders}

Visualizing the training result

To generate the accuracy plot (acc_plot) and loss plot (loss_plot):

python eval_visual.py --log-dir {log_folder}

Train with MobileNet_v2

The usage is the same as for the ResNet models above; simply replace train.py with train_light.py, as in the command below.
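For instance, with arguments identical to train.py:

python train_light.py -a {model_name} --b {batch_size} --multiprocessing-distributed --world-size 1 --rank 0 {imagenet-folder with train and val folders}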

Compute the parameters and FLOPs

If you have installed thop, you can run paras_flops.py to compute the number of parameters and FLOPs of our models. The usage is as follows:

python paras_flops.py -a {model_name}
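For reference, here is a minimal sketch of what such a script does with thop. The input resolution of 224x224 and the rla_resnet50 constructor are assumptions; replace them with the actual model builder used by paras_flops.py.

```python
import torch
from thop import profile, clever_format

# Hypothetical: build one of the repo's models; paras_flops.py selects it
# from the -a argument. Replace with the actual constructor.
from rla_resnet import rla_resnet50  # assumed module/function name
model = rla_resnet50()

# count multiply-accumulates (reported as FLOPs) and parameters
# for a single 224x224 image
inputs = torch.randn(1, 3, 224, 224)
macs, params = profile(model, inputs=(inputs,))
macs, params = clever_format([macs, params], "%.3f")
print(f"FLOPs: {macs}, Params: {params}")
```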

More examples are shown in examples.md.

MMDetection

After installing MMDetection (see get_started.md), do the following:

  • Put the file resnet_rla.py in the folder './mmdetection/mmdet/models/backbones/', and do not forget to import the model in the __init__.py file.
  • Put the base config files (e.g. faster_rcnn_r50rla_fpn.py) in the folder './mmdetection/configs/_base_/models/'.
  • Put the full config files (e.g. faster_rcnn_r50rla_fpn_1x_coco.py) in the folder './mmdetection/configs/faster_rcnn/'.

Note that the config files of the latest MMDetection versions differ slightly; please adapt these files to the current format. As a rough guide, a backbone config might look like the sketch below.
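A hedged sketch of such a config. The registered backbone name 'RLA_ResNet' and its arguments are assumptions; check the class defined in resnet_rla.py for the real name and signature.

```python
# Hypothetical fragment of ./mmdetection/configs/_base_/models/faster_rcnn_r50rla_fpn.py.
# 'RLA_ResNet' is an assumed registered name; verify it against resnet_rla.py.
model = dict(
    type='FasterRCNN',
    backbone=dict(
        type='RLA_ResNet',        # assumed: the backbone registered in resnet_rla.py
        depth=50,
        num_stages=4,
        out_indices=(0, 1, 2, 3),
        frozen_stages=1,
        norm_cfg=dict(type='BN', requires_grad=True),
        style='pytorch'),
    # neck (FPN), rpn_head, roi_head, etc. follow the standard
    # faster_rcnn_r50_fpn config and are omitted here.
)
```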

Experiments

ImageNet

| Model | Param. | FLOPs | Top-1 err. (%) | Top-5 err. (%) | BaiduDrive (models) | Extract code | GoogleDrive |
|-------|--------|-------|----------------|----------------|---------------------|--------------|-------------|
| RLA-ResNet50 | 24.67M | 4.17G | 22.83 | 6.58 | resnet50_rla_2283 | 5lf1 | resnet50_rla_2283 |
| RLA-ECANet50 | 24.67M | 4.18G | 22.15 | 6.11 | ecanet50_rla_2215 | xrfo | ecanet50_rla_2215 |
| RLA-ResNet101 | 42.92M | 7.79G | 21.48 | 5.80 | resnet101_rla_2148 | zrv5 | resnet101_rla_2148 |
| RLA-ECANet101 | 42.92M | 7.80G | 21.00 | 5.51 | ecanet101_rla_2100 | vhpy | ecanet101_rla_2100 |
| RLA-MobileNetV2 | 3.46M | 351.8M | 27.62 | 9.18 | dsrla_mobilenetv2_k32_2762 | g1pm | dsrla_mobilenetv2_k32_2762 |
| RLA-ECA-MobileNetV2 | 3.46M | 352.4M | 27.07 | 8.89 | dsrla_mobilenetv2_k32_eca_2707 | 9orl | dsrla_mobilenetv2_k32_eca_2707 |
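To run inference with a downloaded checkpoint, something like the following should work. The file name, the 'state_dict' key (train.py appears to follow the PyTorch ImageNet example, which saves checkpoints in this format) and the rla_resnet50 constructor are assumptions.

```python
import torch
from rla_resnet import rla_resnet50  # assumed constructor, as above

model = rla_resnet50()
ckpt = torch.load('resnet50_rla_2283.pth.tar', map_location='cpu')
# checkpoints saved by train.py-style scripts usually wrap weights in 'state_dict'
state_dict = ckpt.get('state_dict', ckpt)
# strip a possible 'module.' prefix left by DistributedDataParallel
state_dict = {k.replace('module.', '', 1): v for k, v in state_dict.items()}
model.load_state_dict(state_dict)
model.eval()
```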

COCO 2017

| Model | AP | AP_50 | AP_75 | BaiduDrive (models) | Extract code | GoogleDrive |
|-------|----|-------|-------|---------------------|--------------|-------------|
| Faster_R-CNN_resnet50_rla | 38.8 | 59.6 | 42.0 | faster_rcnn_r50rla_fpn_1x_coco_388 | q5c8 | faster_rcnn_r50rla_fpn_1x_coco_388 |
| Faster_R-CNN_ecanet50_rla | 39.8 | 61.2 | 43.2 | faster_rcnn_r50rlaeca_fpn_1x_coco_398 | f5xs | faster_rcnn_r50rlaeca_fpn_1x_coco_398 |
| Faster_R-CNN_resnet101_rla | 41.2 | 61.8 | 44.9 | faster_rcnn_r101rla_fpn_1x_coco_412 | 0ri3 | faster_rcnn_r101rla_fpn_1x_coco_412 |
| Faster_R-CNN_ecanet101_rla | 42.1 | 63.3 | 46.1 | faster_rcnn_r101rlaeca_fpn_1x_coco_421 | cpug | faster_rcnn_r101rlaeca_fpn_1x_coco_421 |
| RetinaNet_resnet50_rla | 37.9 | 57.0 | 40.8 | retinanet_r50rla_fpn_1x_coco_379 | lahj | retinanet_r50rla_fpn_1x_coco_379 |
| RetinaNet_ecanet50_rla | 39.0 | 58.7 | 41.7 | retinanet_r50rlaeca_fpn_1x_coco_390 | adyd | retinanet_r50rlaeca_fpn_1x_coco_390 |
| RetinaNet_resnet101_rla | 40.3 | 59.8 | 43.5 | retinanet_r101rla_fpn_1x_coco_403 | p8y0 | retinanet_r101rla_fpn_1x_coco_403 |
| RetinaNet_ecanet101_rla | 41.5 | 61.6 | 44.4 | retinanet_r101rlaeca_fpn_1x_coco_415 | hdqx | retinanet_r101rlaeca_fpn_1x_coco_415 |
| Mask_R-CNN_resnet50_rla | 39.5 | 60.1 | 43.3 | mask_rcnn_r50rla_fpn_1x_coco_395 | j1x6 | mask_rcnn_r50rla_fpn_1x_coco_395 |
| Mask_R-CNN_ecanet50_rla | 40.6 | 61.8 | 44.0 | mask_rcnn_r50rlaeca_fpn_1x_coco_406 | c08r | mask_rcnn_r50rlaeca_fpn_1x_coco_406 |
| Mask_R-CNN_resnet101_rla | 41.8 | 62.3 | 46.2 | mask_rcnn_r101rla_fpn_1x_coco_418 | 8bsn | mask_rcnn_r101rla_fpn_1x_coco_418 |
| Mask_R-CNN_ecanet101_rla | 42.9 | 63.6 | 46.9 | mask_rcnn_r101rlaeca_fpn_1x_coco_429 | 3kmz | mask_rcnn_r101rlaeca_fpn_1x_coco_429 |

Citation

@misc{zhao2021recurrence,
      title={Recurrence along Depth: Deep Convolutional Neural Networks with Recurrent Layer Aggregation}, 
      author={Jingyu Zhao and Yanwen Fang and Guodong Li},
      year={2021},
      eprint={2110.11852},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

Questions

Please contact '[email protected]' or '[email protected]'.
