UniNAS

A highly modular PyTorch framework with a focus on Neural Architecture Search (NAS).

under development

(development happens mostly on our internal GitLab; we push to GitHub only every once in a while)

  • APIs may change
  • argparse arguments may be moved to more fitting classes
  • there may be incomplete or not-yet-working pieces of code
  • ...

Features

  • modular and therefore reusable
    • data set loading,
    • network building code and topologies,
    • methods to train architecture weights,
    • sets of operations (primitives),
    • weight initializers,
    • metrics,
    • ... and more
  • everything is configurable from the command line and/or config files
    • improved reproducibility, since detailed run configurations are saved and logged
    • powerful search network descriptions enable e.g. highly customizable weight sharing settings
    • the underlying argparse mechanism enables using a GUI for configurations
  • compare results of different methods in the same environment
  • import and export detailed network descriptions
  • integrate new methods and more with fairly little effort
  • NAS-Benchmark integration
    • NAS-Bench-201
  • ... and more

Where is this code from?

Aside from a few pieces, the code is self-written. However, the official code of other methods is sometimes useful to learn from or to clear up details, and other frameworks can be used for their nice features.

Other meta-NAS frameworks

  • Deep Architect
    • highly customizable search spaces, hyperparameters, ...
    • the searchers (SMBO, MCTS, ...) focus on fully training (many) models and are not differentiable
  • D-X-Y NAS-Projects
  • Auto-PyTorch
    • stronger focus on model selection than on optimizing a single architecture
  • Vega
  • NNI

Repository notes

Dynamic argparse tree

Everything is an argument. Learning rate? Argument. Scheduler? Argument. The exact topology of a Network, including how many of each cell and whether they share their architecture weights? Also arguments.

This is enabled by the idea that each used class (method, network, cells, regularizers, ...) can add arguments to argparse, including which further classes are required (e.g. a method needs a network, which needs a stem).

It starts with the Main class adding a Task (cls_task), which itself adds all required components (cls_*).
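
As a rough illustration of this pattern, a component class contributes its own arguments and declares which further classes it needs; the method names below are hypothetical, not UniNAS's actual hooks:

import argparse

# Hypothetical sketch only; the real hooks live in the respective UniNAS classes.
class SimpleTrainer:
    @classmethod
    def add_arguments(cls, parser: argparse.ArgumentParser):
        # own arguments, namespaced by the class name
        parser.add_argument("--SimpleTrainer.max_epochs", type=int, default=100)
        parser.add_argument("--SimpleTrainer.test_last", type=int, default=10)

    @classmethod
    def meta_arguments(cls) -> list:
        # further classes this component requires; their arguments are added recursively
        return ["cls_optimizers", "cls_schedulers"]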

To see all available (meta) arguments, run Main.list_all_arguments() in uninas/main.py.
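
For example (assuming the uninas package is importable):

# prints every available (meta) argument and which class contributes it
from uninas.main import Main

Main.list_all_arguments()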

Graphical user interface

Since putting together the arguments correctly is not trivial (and requires some familiarity with the code base), an easier approach is using a GUI.

Have a look at uninas/gui/tk_gui/main.py, a tkinter GUI frontend.

The GUI can automatically filter usable classes, display available arguments, and display tooltips; based only on the implemented argparse (meta) arguments in the respective classes.

Some meta arguments take a single class name:

e.g: cls_task, cls_trainer, cls_data, cls_criterion, cls_method

The chosen classes define their own arguments, e.g.:

  • cls_trainer="SimpleTrainer"
  • SimpleTrainer.max_epochs=100
  • SimpleTrainer.test_last=10

Their names are also available as wildcards that automatically resolve to the respectively set class name:

  • cls_trainer="SimpleTrainer"
  • {cls_trainer}.max_epochs --> SimpleTrainer.max_epochs
  • {cls_trainer}.test_last --> SimpleTrainer.test_last

Some meta arguments take a comma-separated list of class names:

e.g. cls_metrics, cls_initializers, cls_regularizers, cls_optimizers, cls_schedulers

The chosen classes also define their own arguments, but each argument name includes the class's index in the list, e.g.:

  • cls_regularizers="DropOutRegularizer, DropPathRegularizer"
  • DropOutRegularizer#0.prob=0.5
  • DropPathRegularizer#1.max_prob=0.3
  • DropPathRegularizer#1.drop_id_paths=false

And they are also available as indexed wildcards (a short sketch of the expansion follows after this list):

  • cls_regularizers="DropOutRegularizer, DropPathRegularizer"
  • {cls_regularizers#0}.prob --> DropOutRegularizer#0.prob
  • {cls_regularizers#1}.max_prob --> DropPathRegularizer#1.max_prob
  • {cls_regularizers#1}.drop_id_paths --> DropPathRegularizer#1.drop_id_paths
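
The expansion itself boils down to string substitution. A minimal, self-contained sketch of the idea (not UniNAS's actual implementation):

def expand_wildcards(args: dict) -> dict:
    """Replace {cls_*} and {cls_*#i} wildcards with the class names they were set to."""
    # build the substitution table from the cls_* meta arguments
    table = {}
    for meta, value in args.items():
        if not meta.startswith("cls_"):
            continue
        names = [v.strip() for v in value.split(",")]
        if len(names) == 1:
            table["{%s}" % meta] = names[0]
        for i, name in enumerate(names):
            table["{%s#%d}" % (meta, i)] = "%s#%d" % (name, i)
    # apply the table to every argument name
    expanded = {}
    for key, value in args.items():
        for wildcard, replacement in table.items():
            key = key.replace(wildcard, replacement)
        expanded[key] = value
    return expanded


args = {
    "cls_trainer": "SimpleTrainer",
    "{cls_trainer}.max_epochs": "100",
    "cls_regularizers": "DropOutRegularizer, DropPathRegularizer",
    "{cls_regularizers#0}.prob": "0.5",
    "{cls_regularizers#1}.max_prob": "0.3",
}
print(expand_wildcards(args))
# {'cls_trainer': 'SimpleTrainer', 'SimpleTrainer.max_epochs': '100',
#  'cls_regularizers': 'DropOutRegularizer, DropPathRegularizer',
#  'DropOutRegularizer#0.prob': '0.5', 'DropPathRegularizer#1.max_prob': '0.3'}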

Register

UniNAS makes heavy use of a registering mechanism (via decorators in uninas/register.py). Classes of the same type (e.g. optimizers, networks, ...) are registered in the same RegisterDict.

Registered classes can be accessed via their name in the Register, regardless of their actual location in the code. This enables e.g. saving network topologies as nested dictionaries, no matter how complicated they are, since the class names are enough to find the classes in the code. (It also grants a certain amount of refactoring freedom.)
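
The underlying idea is a plain name-to-class mapping filled by decorators. A minimal sketch of the concept (hypothetical names, not the actual contents of uninas/register.py):

class RegisterDict(dict):
    """Maps class names to classes of one type (e.g. all optimizers)."""

    def __call__(self, cls):
        # used as a decorator: the class registers itself under its own name
        self[cls.__name__] = cls
        return cls


register_optimizer = RegisterDict()


@register_optimizer
class MyOptimizer:
    pass


# a saved config only needs the string "MyOptimizer" to recover the class,
# regardless of where that class is defined in the code
assert register_optimizer["MyOptimizer"] is MyOptimizer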

Exporting networks

(Trained) networks can easily be used by other PyTorch frameworks/scripts; see verify.py for a simple example.
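
As a generic sketch of how an exported network could be consumed from a plain PyTorch script (the actual loading call used by UniNAS may differ, see verify.py; the file name below is hypothetical):

import torch

# hypothetical path to an exported network; adjust to whatever your export produced
network = torch.load("exported_network.pt", map_location="cpu")
network.eval()

with torch.no_grad():
    # e.g. a single CIFAR-10-sized input
    out = network(torch.randn(1, 3, 32, 32))
print(out.shape)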

Citation

The framework

We will possibly create a whitepaper at some point.

@misc{kl2020uninas,
  author = {Kevin Alexander Laube},
  title = {UniNAS},
  year = {2020},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/cogsys-tuebingen/uninas}}
}

Inter-choice dependent super-network weights

  1. Train super-networks, e.g. via experiments/demo/inter_choice_weights/icw1_train_supernet_nats.py (see the sketch after this list)
    • you will need CIFAR-10, but can also easily use fake data or download it
    • to generate SubImageNet see uninas/utils/generate/data/subImageNet
  2. Evaluate the super-network, e.g. via experiments/demo/inter_choice_weights/icw2_eval_supernet.py
  3. View the evaluation results in the save dir, in TensorBoard, or plotted directly
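
The demo scripts are regular Python entry points and are normally run directly with python from the repository root; a purely illustrative, programmatic way to launch them:

import runpy

# 1) train the super-network (expects CIFAR-10, or use fake data)
runpy.run_path("experiments/demo/inter_choice_weights/icw1_train_supernet_nats.py", run_name="__main__")

# 2) evaluate the trained super-network
runpy.run_path("experiments/demo/inter_choice_weights/icw2_eval_supernet.py", run_name="__main__")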

@article{laube2021interchoice,
  title={Inter-choice dependent super-network weights},
  author={Kevin Alexander Laube and Andreas Zell},
  journal={arXiv preprint arXiv:2104.11522},
  year={2021}
}