Learning Versatile Neural Architectures by Propagating Network Codes

Overview

Mingyu Ding, Yuqi Huo, Haoyu Lu, Linjie Yang, Zhe Wang, Zhiwu Lu, Jingdong Wang, Ping Luo


Introduction

This work includes:
(1) NAS-Bench-MR, a NAS benchmark built on four challenging datasets under practical training settings for learning task-transferable architectures.
(2) Network Coding Propagation (NCP), an efficient predictor-based algorithm that back-propagates the gradients of neural predictors to directly update architecture codes along desired gradient directions for various objectives; a minimal sketch follows below.
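Below is a minimal sketch of the NCP update, assuming a trained predictor that maps an architecture code to a scalar metric; the names here (ncp_step, predictor) are illustrative rather than the repository API.

import torch

# One NCP step (illustrative): back-propagate the predictor's output to the
# architecture code and move the code along the gradient of the objective.
def ncp_step(code, predictor, lr=0.1):
    code = code.clone().detach().requires_grad_(True)
    predicted_metric = predictor(code)   # e.g. predicted mIoU
    predicted_metric.backward()          # gradient w.r.t. the code itself
    with torch.no_grad():
        code = code + lr * code.grad     # gradient ascent on the code
    return code.round().clamp(min=1)     # architecture codes are positive integers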

This framework is implemented and tested on Ubuntu/macOS with CUDA 9.0/10.0, Python 3, PyTorch 1.3-1.6, and NVIDIA Tesla V100 GPUs (CPU execution is also supported).

Dataset

We build our benchmark on four computer vision tasks, i.e., image classification (ImageNet), semantic segmentation (Cityscapes), 3D detection (KITTI), and video recognition (HMDB51). In total, nine different settings are included, corresponding to the data/*/trainval.pkl files.

Note that each .pkl file contains more than 2,500 architectures together with their evaluation results under multiple metrics. The original training logs and checkpoints (including model weights and optimizer states, more than 4 TB in total) will be uploaded to Google Drive. We will share the download link once the upload is complete.

Quick start

First, train the predictor:

python3 tools/train_predictor.py  # --cfg configs/seg.yaml

Then, edit the architecture codes along the desired gradients:

python3 tools/ncp.py  # --cfg configs/seg.yaml

Examples

  • An example in NAS-Bench-MR (Seg):
{'mIoU': 70.57,
 'mAcc': 80.07,
 'aAcc': 95.29,
 'input_channel': [16, 64],
 # [num_branches, [num_convs], [num_channels]]
 'network_setting': [[1, [3], [128]],
  [2, [3, 3], [32, 48]],
  [2, [3, 3], [32, 48]],
  [2, [3, 3], [32, 48]],
  [3, [2, 3, 2], [16, 32, 16]],
  [3, [2, 3, 2], [16, 32, 16]],
  [4, [2, 4, 1, 1], [96, 112, 48, 80]]],
 'last_channel': 112,
 # flattened code: [input_channels, num_blocks_1, num_convs_1, num_channels_1, ..., num_blocks_4, num_convs_4, num_channels_4]; stage i has i branches
 'embedding': [16, 64, 1, 3, 128, 3, 3, 3, 32, 48, 2, 2, 3, 2, 16, 32, 16, 1, 2, 4, 1, 1, 96, 112, 48, 80]
}
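For reference, the flat embedding above can be expanded back into network_setting with a short decoder; this is a sketch (decode_embedding is not a repository function) that assumes stage i of the supernet always has i branches, as in the example:

def decode_embedding(embedding):
    # Split the flat code into input channels and four stage segments;
    # the segment of stage i holds [num_blocks, i conv counts, i channel widths].
    input_channel, rest = embedding[:2], embedding[2:]
    setting = []
    for branches in (1, 2, 3, 4):
        num_blocks = rest[0]
        convs = rest[1:1 + branches]
        chans = rest[1 + branches:1 + 2 * branches]
        setting += [[branches, convs, chans]] * num_blocks  # repeat per block
        rest = rest[1 + 2 * branches:]
    return input_channel, setting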
  • Load Datasets:
import pickle
exps = pickle.load(open('data/seg/trainval.pkl', 'rb'))
# Then process each item in exps
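For instance, assuming exps is a list of records shaped like the example above (use exps.values() if it is a dict), the best architecture under a metric can be picked directly:

# Example: pick the architecture with the highest mIoU in the benchmark.
best = max(exps, key=lambda item: item['mIoU'])
print(best['mIoU'], best['network_setting'])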
  • Load Model / Get Params and FLOPs (based on the thop library):
import torch
from thop import profile
from models.supernet import MultiResolutionNet

# Get model using input_channel & network_setting & last_channel
model = MultiResolutionNet(input_channel=[16, 64],
                           network_setting=[[1, [3], [128]],
                            [2, [3, 3], [32, 48]],
                            [2, [3, 3], [32, 48]],
                            [2, [3, 3], [32, 48]],
                            [3, [2, 3, 2], [16, 32, 16]],
                            [3, [2, 3, 2], [16, 32, 16]],
                            [4, [2, 4, 1, 1], [96, 112, 48, 80]]],
                           last_channel=112)

# Get FLOPs and parameters
dummy_input = torch.randn(1, 3, 224, 224)
macs, params = profile(model, inputs=(dummy_input,))
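thop returns raw counts, so a readable summary can be printed as follows:

# Human-readable summary of the thop output (MACs and parameter count).
print(f'MACs: {macs / 1e9:.2f} G, Params: {params / 1e6:.2f} M')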


Data Format

Each code in data/search_list.txt denotes an architecture. It can be loaded into our supernet as follows:

  • Code2Setting
params = '96_128-1_1_1_48-1_2_1_1_128_8-1_3_1_1_1_128_128_120-4_4_4_4_4_4_128_128_128_128-64'
embedding = [int(item) for item in params.replace('-', '_').split('_')]

# embedding = [96, 128,                               # input channels
#              1, 1, 1, 48,                           # stage 1
#              1, 2, 1, 1, 128, 8,                    # stage 2
#              1, 3, 1, 1, 1, 128, 128, 120,          # stage 3
#              4, 4, 4, 4, 4, 4, 128, 128, 128, 128,  # stage 4
#              64]                                    # last channel
# Each stage segment reads [num_blocks, num_branches, num_convs..., num_channels...].
input_channels = embedding[0:2]
block_1 = embedding[2:6]
block_2 = embedding[6:12]
block_3 = embedding[12:20]
block_4 = embedding[20:30]
last_channels = embedding[30:31]
network_setting = []
for item in [block_1, block_2, block_3, block_4]:
    num_branches = item[1]
    for _ in range(item[0]):  # repeat the stage setting num_blocks times
        network_setting.append([num_branches,
                                item[2:2 + num_branches],   # convs per branch
                                item[2 + num_branches:]])   # channels per branch

# network_setting = [[1, [1], [48]], 
#  [2, [1, 1], [128, 8]],
#  [3, [1, 1, 1], [128, 128, 120]], 
#  [4, [4, 4, 4, 4], [128, 128, 128, 128]], 
#  [4, [4, 4, 4, 4], [128, 128, 128, 128]], 
#  [4, [4, 4, 4, 4], [128, 128, 128, 128]], 
#  [4, [4, 4, 4, 4], [128, 128, 128, 128]]]
# input_channels = [96, 128]
# last_channels = [64]
  • Setting2Code
input_channels = [str(item) for item in input_channels]
block_1 = [str(item) for item in block_1]
block_2 = [str(item) for item in block_2]
block_3 = [str(item) for item in block_3]
block_4 = [str(item) for item in block_4]
last_channels = [str(item) for item in last_channels]

params = [input_channels, block_1, block_2, block_3, block_4, last_channels]
params = ['_'.join(item) for item in params]
params = '-'.join(params)
# params == '96_128-1_1_1_48-1_2_1_1_128_8-1_3_1_1_1_128_128_120-4_4_4_4_4_4_128_128_128_128-64'
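The two snippets are inverses of each other, which gives a quick sanity check:

# Round-trip check: re-encoding the decoded blocks recovers the code string.
assert params == '96_128-1_1_1_48-1_2_1_1_128_8-1_3_1_1_1_128_128_120-4_4_4_4_4_4_128_128_128_128-64'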

License

For academic use, this project is licensed under the 2-clause BSD License. For commercial use, please contact the author.
