Official PyTorch Implementation of HELP: Hardware-adaptive Efficient Latency Prediction for NAS via Meta-Learning (NeurIPS 2021 Spotlight)

[NeurIPS 2021 Spotlight] HELP: Hardware-adaptive Efficient Latency Prediction for NAS via Meta-Learning [Paper]

This is the official PyTorch implementation of HELP: Hardware-adaptive Efficient Latency Prediction for NAS via Meta-Learning.

@inproceedings{lee2021help,
    title     = {HELP: Hardware-Adaptive Efficient Latency Prediction for NAS via Meta-Learning},
    author    = {Lee, Hayeon and Lee, Sewoong and Chong, Song and Hwang, Sung Ju},
    booktitle = {Advances in Neural Information Processing Systems (NeurIPS)},
    year      = {2021}
} 

Overview

For deployment, neural architecture search (NAS) should be hardware-aware, in order to satisfy device-specific constraints (e.g., memory usage, latency, and energy consumption) and enhance model efficiency. Existing hardware-aware NAS methods collect a large number of samples (e.g., accuracy and latency) from a target device and then build either a lookup table or a latency estimator. However, this approach is impractical in real-world scenarios: there exist numerous devices with different hardware specifications, and collecting samples from such a large number of devices incurs prohibitive computational and monetary costs. To overcome these limitations, we propose the Hardware-adaptive Efficient Latency Predictor (HELP), which formulates device-specific latency estimation as a meta-learning problem, such that the latency of a model for a given task on an unseen device can be estimated with only a few samples. To this end, we introduce novel hardware embeddings that represent any device as a black-box function mapping architectures to latencies, and we meta-learn the hardware-adaptive latency predictor in a device-dependent manner using these embeddings. We validate HELP's latency estimation performance on unseen platforms, on which it achieves high estimation performance with as few as 10 measurement samples, outperforming all relevant baselines. We also validate end-to-end NAS frameworks using HELP against ones without it, and show that it largely reduces the total time cost of the base NAS method in latency-constrained settings.
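The key idea is that a device can be characterized purely by the latencies it assigns to a small, fixed set of reference architectures. Below is a minimal sketch of this black-box device embedding, assuming hypothetical helpers (measure_latency, reference_archs); the actual embedding and predictor follow the paper and the code in this repository.

import torch

# A minimal sketch of the black-box device embedding idea. Hypothetical
# helpers: `measure_latency(arch)` queries the target device for the
# latency of one architecture; `reference_archs` is a small fixed set
# of architectures shared across all devices.
def device_embedding(measure_latency, reference_archs):
    latencies = [measure_latency(arch) for arch in reference_archs]
    return torch.tensor(latencies, dtype=torch.float32)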

Prerequisites

  • Python 3.8 (Anaconda)
  • PyTorch 1.8.1
  • CUDA 10.2

Hardware spec used for meta-training the proposed HELP model

  • GPU: A single Nvidia GeForce RTX 2080Ti
  • CPU: Intel(R) Xeon(R) Silver 4114 CPU @ 2.20GHz

Installation

$ conda create --name help python=3.8
$ conda activate help
$ conda install pytorch==1.8.1 torchvision cudatoolkit=10.2 -c pytorch
$ pip install nas-bench-201
$ pip install tqdm
$ conda install scipy
$ conda install pyyaml
$ conda install tensorboard
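After installation, a quick sanity check (a minimal snippet, not part of this repository) confirms that the expected PyTorch and CUDA versions are active:

import torch

# Verify the environment matches the versions used for meta-training.
print(torch.__version__)          # expected: 1.8.1
print(torch.version.cuda)         # expected: 10.2
print(torch.cuda.is_available())  # True if the GPU is visible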

Contents

1. Experiments on NAS-Bench-201 Search Space

2. Experiments on FBNet Search Space

3. Experiments on OFA Search Space

4. Experiments on HAT Search Space

1. Reproduce Main Results on NAS-Bench-201 Search Space

We provide the code to reproduce the main results on the NAS-Bench-201 search space as follows:

  • Computing architecture ranking correlation between latencies estimated by HELP and true measured latencies on unseen devices (Table 3).
  • Latency-constrained NAS Results with MetaD2A + HELP on unseen devices (Table 4).
  • Meta-Training HELP model.

1.1. Data Preparation and Model Checkpoint

We include all required datasets and checkpoints in this GitHub repository.

1.2. [Meta-Test] Architecture ranking correlation

You can compute the architecture ranking correlation between latencies estimated by HELP and true measured latencies on unseen devices in the NAS-Bench-201 search space (Table 3):

$ python main.py --search_space nasbench201 \
                 --mode 'meta-test' \
                 --num_samples 10 \
                 --num_meta_train_sample 900 \
                 --load_path [Path of Checkpoint File] \
                 --meta_train_devices '1080ti_1,1080ti_32,1080ti_256,silver_4114,silver_4210r,samsung_a50,pixel3,essential_ph_1,samsung_s7' \
                 --meta_valid_devices 'titanx_1,titanx_32,titanx_256,gold_6240' \
                 --meta_test_devices 'titan_rtx_256,gold_6226,fpga,pixel2,raspi4,eyeriss'

You can use the checkpoint file provided in this repository, ./data/nasbench201/checkpoint/help_max_corr.pt, as follows:

$ python main.py --search_space nasbench201 \
                 --mode 'meta-test' \
                 --num_samples 10 \
                 --num_meta_train_sample 900 \
                 --load_path './data/nasbench201/checkpoint/help_max_corr.pt' \
                 --meta_train_devices '1080ti_1,1080ti_32,1080ti_256,silver_4114,silver_4210r,samsung_a50,pixel3,essential_ph_1,samsung_s7' \
                 --meta_valid_devices 'titanx_1,titanx_32,titanx_256,gold_6240' \
                 --meta_test_devices 'titan_rtx_256,gold_6226,fpga,pixel2,raspi4,eyeriss'

or you can use the provided script:

$ bash script/run_meta_test_nasbench201.sh [GPU_NUM]

Architecture Ranking Correlation Results (Table 3)

| Method | # of Training Samples from Target Device | Desktop GPU (Titan RTX, Batch 256) | Desktop CPU (Intel Gold 6226) | Mobile (Pixel2) | Raspi4 | ASIC | FPGA | Mean |
|---|---|---|---|---|---|---|---|---|
| FLOPS | - | 0.950 | 0.826 | 0.765 | 0.846 | 0.437 | 0.900 | 0.787 |
| Layer-wise Predictor | - | 0.667 | 0.866 | - | - | - | - | 0.767 |
| BRP-NAS | 900 | 0.814 | 0.796 | 0.666 | 0.847 | 0.811 | 0.801 | 0.789 |
| BRP-NAS (+extra samples) | 3200 | 0.822 | 0.805 | 0.693 | 0.853 | 0.830 | 0.828 | 0.805 |
| HELP (Ours) | 10 | 0.987 | 0.989 | 0.802 | 0.890 | 0.940 | 0.985 | 0.932 |
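The ranking correlation reported above is Spearman's rank correlation between predicted and measured latencies. A minimal sketch of this metric using scipy (installed above); the array values are placeholders for illustration, not measured data:

from scipy.stats import spearmanr

# Predicted vs. measured latencies (ms) for a set of architectures on
# one target device -- placeholder values for illustration only.
predicted = [12.1, 8.4, 15.9, 10.2, 7.7]
measured  = [11.8, 8.9, 16.3, 10.0, 7.5]

rho, _ = spearmanr(predicted, measured)
print(f"Spearman rank correlation: {rho:.3f}")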

1.3. [Meta-Test] Efficient Latency-constrained NAS combined with MetaD2A

You can reproduce the latency-constrained NAS results with MetaD2A + HELP on unseen devices in the NAS-Bench-201 search space (Table 4):

$ python main.py --search_space nasbench201 --mode 'nas' \
                 --load_path [Path of Checkpoint File] \
                 --sampled_arch_path 'data/nasbench201/arch_generated_by_metad2a.txt' \
                 --nas_target_device [Device] \
                 --latency_constraint [Latency Constraint]

For example, if you use the checkpoint file provided in this repository, the checkpoint path is ./data/nasbench201/checkpoint/help_max_corr.pt. If you set the target device to the CPU Intel Gold 6226 with batch size 256 (gold_6226) and the latency constraint to 11.0 ms, the command is as follows:

$ python main.py --search_space nasbench201 --mode 'nas' \
                 --load_path './data/nasbench201/checkpoint/help_max_corr.pt' \
                 --sampled_arch_path 'data/nasbench201/arch_generated_by_metad2a.txt' \
                 --nas_target_device gold_6226 \
                 --latency_constraint 11.0

or you can use the provided script:

$ bash script/run_nas_metad2a.sh [GPU_NUM]
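Conceptually, this NAS step scores the MetaD2A-generated candidate architectures with the few-shot-adapted latency predictor and keeps the best candidate that satisfies the constraint. A minimal sketch of that selection logic, with hypothetical predict_latency and predict_accuracy helpers standing in for the adapted HELP predictor and the accuracy model:

def select_architecture(candidates, predict_latency, predict_accuracy,
                        latency_constraint_ms):
    """Pick the highest-predicted-accuracy candidate whose predicted
    latency meets the constraint. Illustrative sketch only."""
    feasible = [a for a in candidates
                if predict_latency(a) <= latency_constraint_ms]
    if not feasible:
        return None  # no candidate meets the constraint
    return max(feasible, key=predict_accuracy)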

Efficient Latency-constrained NAS Results (Table 4)

| Device | # of Training Samples from Target Device | Latency Constraint (ms) | Latency (ms) | Accuracy (%) | Neural Architecture Config |
|---|---|---|---|---|---|
| GPU Titan RTX (Batch 256) `titan_rtx_256` | 10 | 18.0 | 17.8 | 69.7 | link |
| | | 21.0 | 18.9 | 71.5 | link |
| | | 25.0 | 24.2 | 71.8 | link |
| CPU Intel Gold 6226 `gold_6226` | 10 | 8.0 | 8.0 | 67.3 | link |
| | | 11.0 | 10.7 | 70.2 | link |
| | | 14.0 | 14.3 | 72.1 | link |
| Mobile Pixel2 `pixel2` | 10 | 14.0 | 13.0 | 69.7 | link |
| | | 18.0 | 19.0 | 71.8 | link |
| | | 22.0 | 25.0 | 73.2 | link |
| ASIC-Eyeriss `eyeriss` | 10 | 5.0 | 3.9 | 71.5 | link |
| | | 7.0 | 5.1 | 71.8 | link |
| | | 9.0 | 9.1 | 73.5 | link |
| FPGA `fpga` | 10 | 4.0 | 3.8 | 70.2 | link |
| | | 5.0 | 4.7 | 71.8 | link |
| | | 6.0 | 7.4 | 73.5 | link |

1.4. Meta-Training HELP model

Note that this process is performed only once for all NAS results.

$ python main.py --search_space nasbench201 \
                 --mode 'meta-train' \
                 --num_samples 10 \
                 --num_meta_train_sample 900 \
                 --meta_train_devices '1080ti_1,1080ti_32,1080ti_256,silver_4114,silver_4210r,samsung_a50,pixel3,essential_ph_1,samsung_s7' \
                 --meta_valid_devices 'titanx_1,titanx_32,titanx_256,gold_6240' \
                 --meta_test_devices 'titan_rtx_256,gold_6226,fpga,pixel2,raspi4,eyeriss' \
                 --exp_name [EXP_NAME] \
                 --seed 3 # e.g., 1, 2, 3

or you can use the provided script:

$ bash script/run_meta_training_nasbench201.sh [GPU_NUM]

The results (checkpoint file, log file, etc.) are saved in

./results/nasbench201/[EXP_NAME]
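For intuition, meta-training iterates over device-specific episodes: the predictor is adapted on a few (architecture, latency) pairs from one device, then the meta-parameters are updated on held-out pairs so that this adaptation transfers across devices. A schematic episodic loop, assuming hypothetical predictor, sample_episode, adapt, and loss_fn pieces; the repository's actual training loop lives in main.py and differs in detail:

import torch

# Schematic episodic meta-training loop (not the exact code in main.py).
# Hypothetical pieces: `sample_episode(device)` returns few-shot
# support/query latency pairs; `adapt(predictor, support)` returns a
# predictor adapted to the device via inner-loop gradient step(s).
def meta_train(predictor, devices, sample_episode, adapt, loss_fn,
               meta_epochs=100, meta_lr=1e-3):
    opt = torch.optim.Adam(predictor.parameters(), lr=meta_lr)
    for _ in range(meta_epochs):
        for device in devices:
            support, query = sample_episode(device)    # few-shot split
            adapted = adapt(predictor, support)        # inner loop
            archs, latencies = query
            loss = loss_fn(adapted(archs), latencies)  # outer-loop loss
            opt.zero_grad()
            loss.backward()
            opt.step()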

2. Reproduce Main Results on FBNet Search Space

We provide the code to reproduce the main results on the FBNet search space as follows:

  • Computing architecture ranking correlation between latencies estimated by HELP and true measured latencies on unseen devices (Table 2).
  • Meta-Training HELP model.

2.1. Data Preparation and Model Checkpoint

We include all required datasets and checkpoints in this GitHub repository.

2.2. [Meta-Test] Architecture ranking correlation

You can compute the architecture ranking correlation between latencies estimated by HELP and true measured latencies on unseen devices in the FBNet search space (Table 2):

$ python main.py --search_space fbnet \
	--mode 'meta-test' \
	--num_samples 10 \
	--num_episodes 4000 \
	--num_meta_train_sample 4000 \
	--load_path './data/fbnet/checkpoint/help_max_corr.pt' \
	--meta_train_devices '1080ti_1,1080ti_32,1080ti_64,silver_4114,silver_4210r,samsung_a50,pixel3,essential_ph_1,samsung_s7' \
	--meta_valid_devices 'titanx_1,titanx_32,titanx_64,gold_6240' \
	--meta_test_devices 'fpga,raspi4,eyeriss'

or you can use the provided script:

$ bash script/run_meta_test_fbnet.sh [GPU_NUM]

Architecture Ranking Correlation Results (Table 2)

| Method | Raspi4 | ASIC | FPGA | Mean |
|---|---|---|---|---|
| MAML | 0.718 | 0.763 | 0.727 | 0.736 |
| Meta-SGD | 0.821 | 0.822 | 0.776 | 0.806 |
| HELP (Ours) | 0.887 | 0.943 | 0.892 | 0.910 |
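The MAML and Meta-SGD baselines differ mainly in how the inner-loop adaptation step is taken: MAML uses one shared scalar learning rate, while Meta-SGD meta-learns a per-parameter learning rate. A minimal sketch of the two update rules on a single parameter tensor (illustrative only, not the baseline implementations used here):

def maml_inner_step(param, grad, alpha=0.01):
    # MAML: one shared scalar inner-loop learning rate.
    return param - alpha * grad

def meta_sgd_inner_step(param, grad, alpha_per_param):
    # Meta-SGD: a learned learning rate per parameter (same shape as
    # the parameter tensor), applied elementwise.
    return param - alpha_per_param * grad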

2.3. Meta-Training HELP model

Note that this process is performed only once for all results.

$ python main.py --search_space fbnet \
	--mode 'meta-train' \
	--num_samples 10 \
	--num_episodes 4000 \
	--num_meta_train_sample 4000 \
	--exp_name [EXP_NAME] \
	--meta_train_devices '1080ti_1,1080ti_32,1080ti_64,silver_4114,silver_4210r,samsung_a50,pixel3,essential_ph_1,samsung_s7' \
	--meta_valid_devices 'titanx_1,titanx_32,titanx_64,gold_6240' \
	--meta_test_devices 'fpga,raspi4,eyeriss' \
	--seed 3 # e.g., 1, 2, 3

or you can use the provided script:

$ bash script/run_meta_training_fbnet.sh [GPU_NUM]

The results (checkpoint file, log file, etc.) are saved in

./results/fbnet/[EXP_NAME]

3. Reproduce Main Results on OFA Search Space

We provide the code to reproduce the main results on the OFA search space as follows:

  • Latency-constrained NAS Results with accuracy predictor of OFA + HELP on unseen devices (Table 5).
  • Validating the obtained neural architecture on ImageNet-1K.
  • Meta-Training HELP model.

3.1. Data Preparation and Model Checkpoint

We include the required datasets (except ImageNet-1K) and checkpoints in this GitHub repository. To validate an obtained neural architecture on ImageNet-1K, you should download ImageNet-1K (2012 version) yourself.

3.2. [Meta-Test] Efficient Latency-constrained NAS combined with accuracy predictor of OFA

You can reproduce the latency-constrained NAS results with OFA + HELP on unseen devices in the OFA search space (Table 5):

$ python main.py \
	--search_space ofa \
	--mode nas \
	--num_samples 10 \
	--seed 3 \
	--num_meta_train_sample 4000 \
	--load_path './data/ofa/checkpoint/help_max_corr.pt' \
	--nas_target_device [DEVICE_NAME] \
	--latency_constraint [LATENCY_CONSTRAINT] \
	--exp_name 'nas' \
	--meta_train_devices '2080ti_1,2080ti_32,2080ti_64,titan_xp_1,titan_xp_32,titan_xp_64,v100_1,v100_32,v100_64' \
	--meta_valid_devices 'titan_rtx_1,titan_rtx_32' \
	--meta_test_devices 'titan_rtx_64' 

For example,

$ python main.py \
	--search_space ofa \
	--mode nas \
	--num_samples 10 \
	--seed 3 \
	--num_meta_train_sample 4000 \
	--load_path './data/ofa/checkpoint/help_max_corr.pt' \
	--nas_target_device titan_rtx_64 \
	--latency_constraint 20 \
	--exp_name 'nas' \
	--meta_train_devices '2080ti_1,2080ti_32,2080ti_64,titan_xp_1,titan_xp_32,titan_xp_64,v100_1,v100_32,v100_64' \
	--meta_valid_devices 'titan_rtx_1,titan_rtx_32' \
	--meta_test_devices 'titan_rtx_64' 

or you can use the provided script:

$ bash script/run_nas_ofa.sh [GPU_NUM]

Efficient Latency-constrained NAS Results (Table 5)

| Device | # of Samples from Target Device | Latency Constraint (ms) | Latency (ms) | Accuracy (%) | Architecture Config |
|---|---|---|---|---|---|
| GPU Titan RTX (Batch 64) | 10 | 20 | 20.3 | 76.0 | link |
| | | 23 | 23.1 | 76.8 | link |
| | | 28 | 28.6 | 77.9 | link |
| CPU Intel Gold 6226 | 20 | 170 | 147 | 77.6 | link |
| | | 190 | 171 | 78.1 | link |
| Jetson AGX Xavier | 10 | 65 | 67.4 | 75.9 | link |
| | | 70 | 76.4 | 76.4 | link |

3.3. Validating obtained neural architecture on ImageNet-1K

$ python validate_imagenet.py \
		--config_path [Path of neural architecture config file] \
		--imagenet_save_path [Path of ImageNet-1K]

For example,

$ python validate_imagenet.py \
		--config_path 'data/ofa/architecture_config/gpu_titan_rtx_64/latency_28.6ms_accuracy_77.9.json' \
		--imagenet_save_path './ILSVRC2012'
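If you want to sanity-check an architecture outside the provided script, a minimal top-1 evaluation loop over the ImageNet validation set could look like the sketch below. It assumes a hypothetical model already built from the architecture config; validate_imagenet.py remains the supported path:

import torch
from torchvision import datasets, transforms

def top1_accuracy(model, imagenet_root, batch_size=64):
    """Illustrative top-1 evaluation loop; not a replacement for
    validate_imagenet.py."""
    tf = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=[0.485, 0.456, 0.406],
                             std=[0.229, 0.224, 0.225]),
    ])
    val = datasets.ImageFolder(f"{imagenet_root}/val", transform=tf)
    loader = torch.utils.data.DataLoader(val, batch_size=batch_size)
    model.eval()
    correct = total = 0
    with torch.no_grad():
        for x, y in loader:
            pred = model(x).argmax(dim=1)
            correct += (pred == y).sum().item()
            total += y.numel()
    return correct / total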

3.4. Meta-training HELP model

Note that this process is performed only once for all results.

$ python main.py --search_space ofa \
                 --mode 'meta-train' \
                 --num_samples 10 \
                 --num_meta_train_sample 4000 \
                 --exp_name [EXP_NAME] \
                 --meta_train_devices '2080ti_1,2080ti_32,2080ti_64,titan_xp_1,titan_xp_32,titan_xp_64,v100_1,v100_32,v100_64' \
                 --meta_valid_devices 'titan_rtx_1,titan_rtx_32' \
                 --meta_test_devices 'titan_rtx_64' \
                 --seed 3 # e.g., 1, 2, 3

or you can use the provided script:

$ bash script/run_meta_training_ofa.sh [GPU_NUM]

4. Main Results on HAT Search Space

We provide the neural architecture configurations to reproduce the machine translation results (WMT'14 En-De task) in the HAT search space.

Efficient Latency-constrained NAS Results

| Task | Device | # of Samples from Target Device | Latency (ms) | BLEU | Architecture Config |
|---|---|---|---|---|---|
| WMT'14 En-De | GPU NVIDIA Titan RTX | 10 | 74.0 | 27.19 | link |
| | | | 106.5 | 27.44 | link |
| WMT'14 En-De | CPU Intel Xeon Gold 6240 | 10 | 159.6 | 27.20 | link |
| | | | 343.2 | 27.52 | link |

You can evaluate these models by computing their BLEU scores and measuring their latencies.
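Measuring latency on a GPU requires warm-up runs and explicit synchronization; below is a minimal sketch of how one might time a model's forward pass (illustrative only; the numbers in the table above come from the HAT measurement setup):

import time
import torch

def measure_latency_ms(model, example_input, warmup=10, runs=100):
    """Average forward-pass latency in milliseconds. Warm-up iterations
    and cuda.synchronize() avoid startup and async-timing artifacts."""
    model.eval()
    with torch.no_grad():
        for _ in range(warmup):
            model(example_input)
        if torch.cuda.is_available():
            torch.cuda.synchronize()
        start = time.time()
        for _ in range(runs):
            model(example_input)
        if torch.cuda.is_available():
            torch.cuda.synchronize()
    return (time.time() - start) / runs * 1000.0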

References

Model-Agnostic Meta-Learning for Fast Adaptation of Deep Networks (ICML 2017)

Meta-SGD: Learning to Learn Quickly for Few-Shot Learning (arXiv 2017)

Once-for-All: Train One Network and Specialize it for Efficient Deployment (ICLR 2020)

NAS-Bench-201: Extending the Scope of Reproducible Neural Architecture Search (ICLR 2020)

BRP-NAS: Prediction-based NAS using GCNs (NeurIPS 2020)

HAT: Hardware-Aware Transformers for Efficient Natural Language Processing (ACL 2020)

Rapid Neural Architecture Search by Learning to Generate Graphs from Datasets (ICLR 2021)

HW-NAS-Bench: Hardware-Aware Neural Architecture Search Benchmark (ICLR 2021)
