An implementation of Neural Architecture Search with Random Labels (CVPR 2021 poster) in PyTorch.


Neural Architecture Search with Random Labels (RLNAS)

Introduction

This project provides an implementation of Neural Architecture Search with Random Labels (RLNAS, CVPR 2021 poster) in PyTorch. Experiments are evaluated on multiple datasets (NAS-Bench-201 and ImageNet) and multiple search spaces (DARTS-like and MobileNet-like). RLNAS achieves comparable or even better results than state-of-the-art NAS methods such as PC-DARTS and Single Path One-Shot, even though those counterparts use full ground-truth labels for searching. We hope our findings can inspire new understandings of what is essential in NAS.

Requirements

  • PyTorch 1.4
  • Python 3.5+

Search results

1. Results in the NAS-Bench-201 search space

nas_201_results

2. Results in the DARTS search space

darts_search_sapce_results

Architecture visualization

1) Architecture searched on CIFAR-10

  • RLDARTS = Genotype(
    normal=[
    ('sep_conv_5x5', 0), ('sep_conv_3x3', 1),
    ('dil_conv_3x3', 0), ('sep_conv_5x5', 2),
    ('sep_conv_3x3', 0), ('dil_conv_5x5', 3),
    ('dil_conv_5x5', 1), ('dil_conv_3x3', 2)],
    normal_concat=[2, 3, 4, 5],
    reduce=[
    ('sep_conv_5x5', 0), ('dil_conv_3x3', 1),
    ('sep_conv_3x3', 0), ('sep_conv_5x5', 2),
    ('dil_conv_3x3', 1), ('sep_conv_3x3', 3),
    ('max_pool_3x3', 1), ('sep_conv_5x5', 2)],
    reduce_concat=[2, 3, 4, 5])

  • Normal cell: architecture_searched_on_cifar10

  • Reduction cell: architecture_searched_on_cifar10
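
The genotypes listed in this section follow the DARTS convention; below is a minimal sketch of the container they assume (the namedtuple definition mirrors the one used in DARTS-style code bases, so a searched genotype can be pasted into genotypes.py for retraining):

from collections import namedtuple

# DARTS-style genotype: `normal`/`reduce` hold (operation, input_node) pairs for each
# intermediate node, and `normal_concat`/`reduce_concat` list the nodes whose outputs
# are concatenated to form the cell output.
Genotype = namedtuple('Genotype', 'normal normal_concat reduce reduce_concat')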

2) Architecture searched on ImageNet-1k without FLOPs constraint

  • RLDARTS = Genotype(
    normal=[
    ('sep_conv_3x3', 0), ('sep_conv_3x3', 1),
    ('sep_conv_3x3', 1), ('sep_conv_3x3', 2),
    ('sep_conv_3x3', 0), ('sep_conv_5x5', 1),
    ('sep_conv_3x3', 0), ('sep_conv_3x3', 1)],
    normal_concat=[2, 3, 4, 5],
    reduce=[
    ('sep_conv_3x3', 0), ('sep_conv_3x3', 1),
    ('sep_conv_5x5', 0), ('sep_conv_3x3', 2),
    ('sep_conv_5x5', 0), ('sep_conv_5x5', 2),
    ('sep_conv_3x3', 2), ('sep_conv_3x3', 4)],
    reduce_concat=[2, 3, 4, 5])

  • Normal cell: architecture_searched_on_imagenet_no_flops_constrain

  • Reduction cell: architecture_searched_on_cifar10

3) Architecture searched on ImageNet-1k with 600M FLOPs constraint

  • RLDARTS = Genotype(
    normal=[
    ('sep_conv_3x3', 0), ('sep_conv_3x3', 1),
    ('skip_connect', 1), ('sep_conv_3x3', 2),
    ('sep_conv_3x3', 1), ('sep_conv_3x3', 2),
    ('skip_connect', 0), ('sep_conv_3x3', 4)],
    normal_concat=[2, 3, 4, 5],
    reduce=[
    ('sep_conv_3x3', 0), ('max_pool_3x3', 1),
    ('sep_conv_3x3', 0), ('skip_connect', 1),
    ('sep_conv_3x3', 0), ('dil_conv_3x3', 1),
    ('skip_connect', 0), ('sep_conv_3x3', 1)],
    reduce_concat=[2, 3, 4, 5])

  • Normal cell: architecture_searched_on_imagenet_no_flops_constrain

  • Reduction cell: architecture_searched_on_cifar10

3. Results in the MobileNet search space

The MobileNet-like search space proposed in ProxylessNAS is adopted in this paper. The SuperNet contains 21 choice blocks, and each block has 7 alternatives: 6 MobileNet blocks (combinations of kernel size {3, 5, 7} and expansion ratio {3, 6}) and 'skip-connect'. A sketch of this enumeration follows.
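
As a concrete illustration, the 7 alternatives per choice block can be listed as below (a minimal sketch; the block names and ordering are assumptions, not necessarily those used in the code base):

from itertools import product
import random

# 6 MobileNet inverted-residual choices (kernel size x expansion ratio) plus skip-connect.
KERNEL_SIZES = (3, 5, 7)
EXPAND_RATIOS = (3, 6)
CHOICES = ['mbconv_k%d_e%d' % (k, e) for k, e in product(KERNEL_SIZES, EXPAND_RATIOS)] + ['skip_connect']
assert len(CHOICES) == 7

NUM_BLOCKS = 21
# A candidate architecture is one choice index per block; Single Path One-Shot style
# supernet training samples these choices uniformly at random.
candidate = [random.randrange(len(CHOICES)) for _ in range(NUM_BLOCKS)]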

mobilenet_search_sapce_results

Architecture visualization

mobilenet_search_sapce_results

Usage

  • RLNAS in NAS-Bench-201

1) Enter the work directory

cd nas_bench_201

2) Train the supernet with random labels (see the sketch after this list)

bash ./scripts-search/algos/train_supernet.sh cifar10 0 1

3) Run evolution search with the angle metric

bash ./scripts-search/algos/evolution_search_with_angle.sh cifar10 0 1

4) Calculate correlation

bash ./scripts-search/algos/cal_correlation.sh cifar10 0 1
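
Step 2 above trains the supernet on random labels instead of the ground-truth ones. Below is a minimal sketch of one way such labels can be produced (a fixed random class per training image, drawn once so the targets stay consistent across epochs; the wrapper is illustrative, not the repository's own code):

import torch
from torch.utils.data import Dataset

class RandomLabelDataset(Dataset):
    """Wraps a labelled dataset and replaces every target with a fixed random class."""
    def __init__(self, base_dataset, num_classes, seed=0):
        self.base = base_dataset
        generator = torch.Generator().manual_seed(seed)
        # One random label per example, sampled once and reused for the whole training run.
        self.random_targets = torch.randint(num_classes, (len(base_dataset),), generator=generator)

    def __len__(self):
        return len(self.base)

    def __getitem__(self, index):
        image, _ = self.base[index]
        return image, int(self.random_targets[index])
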
  • RLNAS in DARTS search space

1) Enter the work directory

cd darts_search_space

Search an architecture on CIFAR-10:

cd cifar10/rlnas/

or search an architecture on ImageNet:

cd imagenet/rlnas/

2) Train the supernet with random labels

cd train_supernet
bash run_train.sh

3) Run evolution search with the angle metric (see the sketch after this list)

cd evolution_search
cp ../train_supernet/models/checkpoint_epoch_50.pth.tar ./model_and_data/
cp ../train_supernet/models/checkpoint_epoch_0.pth.tar ./model_and_data/
bash run_server.sh
bash run_test.sh

4) Architecture evaluation

cd retrain_architetcure

Add the searched architecture to genotypes.py

bash run_retrain.sh
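
The two checkpoints copied in step 3 (epoch 0 and epoch 50) are what the angle metric compares: how far a candidate's weights have moved away from their initialization during supernet training. Below is a minimal sketch of such an angle computation, assuming the candidate's weight tensors have already been gathered from the two checkpoints (the gathering itself depends on the repository's checkpoint layout):

import torch

def architecture_angle(init_weights, trained_weights):
    """Angle between the concatenated weight vectors of one candidate architecture
    taken at initialization and after supernet training."""
    v0 = torch.cat([w.detach().flatten().float() for w in init_weights])
    v1 = torch.cat([w.detach().flatten().float() for w in trained_weights])
    cosine = torch.dot(v0, v1) / (v0.norm() * v1.norm() + 1e-12)
    return torch.acos(cosine.clamp(-1.0, 1.0)).item()

Evolution search then ranks candidates by this angle rather than by validation accuracy.
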
  • RLNAS in MobileNet search space

The commands follow almost the same steps as RLNAS in the DARTS search space, except that you need to run 'bash run_generate_flops_lookup_table.sh' before the evolution search; a sketch of the lookup-table idea follows.
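
The lookup table produced by that script lets the search reject candidates that exceed the 600M FLOPs budget without rebuilding models. A minimal sketch of the idea (count_block_flops is a hypothetical helper, and the file name and format are assumptions):

import pickle

def build_flops_lookup_table(choices, num_blocks, count_block_flops,
                             path='flops_lookup_table.pkl'):
    # Precompute FLOPs once for every (block index, choice index) pair.
    table = {(b, c): count_block_flops(b, choice)
             for b in range(num_blocks)
             for c, choice in enumerate(choices)}
    with open(path, 'wb') as f:
        pickle.dump(table, f)
    return table

def candidate_flops(candidate, table):
    # The FLOPs of a candidate is the sum of its per-block entries.
    return sum(table[(b, c)] for b, c in enumerate(candidate))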

Note: set up a server for the distributed search

tmux new -s mq_server
sudo apt update
sudo apt install rabbitmq-server
sudo service rabbitmq-server start
sudo rabbitmqctl add_user test test
sudo rabbitmqctl set_permissions -p / test '.*' '.*' '.*'

Before searching, please modify the host and username in the config file evolution_search/config.py, e.g. as sketched below.
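
For reference, the values to adjust look like the following (an illustrative excerpt; the exact field names in config.py may differ):

# evolution_search/config.py (illustrative excerpt)
host = '127.0.0.1'   # address of the machine running the RabbitMQ broker
username = 'test'    # RabbitMQ user created with `rabbitmqctl add_user` above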

Citation

If you find this project helpful for your research, please consider citing the following papers:

@inproceedings{zhang2021neural,
  title={Neural Architecture Search with Random Labels},
  author={Zhang, Xuanyang and Hou, Pengfei and Zhang, Xiangyu and Sun, Jian},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  year={2021}
}
@inproceedings{hu2020angle,
  title={Angle-based search space shrinking for neural architecture search},
  author={Hu, Yiming and Liang, Yuding and Guo, Zichao and Wan, Ruosi and Zhang, Xiangyu and Wei, Yichen and Gu, Qingyi and Sun, Jian},
  booktitle={European Conference on Computer Vision},
  pages={119--134},
  year={2020},
  organization={Springer}
}
@inproceedings{guo2020single,
  title={Single path one-shot neural architecture search with uniform sampling},
  author={Guo, Zichao and Zhang, Xiangyu and Mu, Haoyuan and Heng, Wen and Liu, Zechun and Wei, Yichen and Sun, Jian},
  booktitle={European Conference on Computer Vision},
  pages={544--560},
  year={2020},
  organization={Springer}
}