An example of implementing a new backbone with the OpenMMLab framework.

Overview

Backbone example on the OpenMMLab framework

English | 简体中文

Introduction

This is a template repository showing how to use the OpenMMLab framework to develop a new backbone for multiple vision tasks.

With the OpenMMLab framework, you can easily develop a new backbone and use MMClassification, MMDetection and MMSegmentation to benchmark it on classification, detection and segmentation tasks.

Setup environment

This repository requires PyTorch and the following OpenMMLab packages:

  • MIM: A command-line tool to manage OpenMMLab packages and experiments.
  • MMCV: OpenMMLab foundational library for computer vision.
  • MMClassification: OpenMMLab image classification toolbox and benchmark. Besides classification, it also serves as a repository for various backbones.
  • MMDetection: OpenMMLab detection toolbox and benchmark.
  • MMSegmentation: OpenMMLab semantic segmentation toolbox and benchmark.

Assuming you have prepared your Python and PyTorch environment, use the following commands to set up the environment:

pip install openmim mmcls mmdet mmsegmentation
mim install mmcv-full
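
To verify the installation, you can optionally check that every package imports and print the installed versions (a quick sketch):

# Optional sanity check: all OpenMMLab packages should import cleanly.
import mmcv
import mmcls
import mmdet
import mmseg

print('mmcv:', mmcv.__version__)
print('mmcls:', mmcls.__version__)
print('mmdet:', mmdet.__version__)
print('mmseg:', mmseg.__version__)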

Data preparation

The expected data directory structure is shown below:

data/
├── imagenet
│   ├── train
│   ├── val
│   └── meta
│       ├── train.txt
│       └── val.txt
├── ade
│   └── ADEChallengeData2016
│       ├── annotations
│       └── images
└── coco
    ├── annotations
    │   ├── instances_train2017.json
    │   └── instances_val2017.json
    ├── train2017
    └── val2017

Here, we only list the minimal files required for training and validation on ImageNet (classification), ADE20K (segmentation) and COCO (object detection).

If you want to benchmark on more datasets or tasks, for example panoptic segmentation with MMDetection, just organize your dataset according to MMDetection's requirements. For semantic segmentation tasks, you can organize your dataset according to this tutorial.

Usage

Implement your backbone

In this example repository, we use ConvNeXt as an example to show how to implement a backbone quickly.

  1. Create your backbone file and put it in the models folder. In this example, models/convnext.py.

    In this file, implement your backbone in PyTorch with two modifications (a minimal sketch is given after this list):

    1. The backbone and its modules should inherit from mmcv.runner.BaseModule. BaseModule is almost the same as torch.nn.Module, but additionally accepts an init_cfg argument to specify the initialization method, including loading a pre-trained model.

    2. Use a one-line decorator, as below, to register the backbone class in the mmcls.models.BACKBONES registry.

      @BACKBONES.register_module(force=True)

      What is a registry? Have a look here!

  2. [Optional] If you want to add extra components for a specific task, you can add them by referring to models/det/layer_decay_optimizer_constructor.py.

  3. Add your backbone class and custom components to models/__init__.py.
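
Putting these pieces together, below is a minimal sketch of a registered backbone. ToyNet and its layers are hypothetical and only for illustration; the actual implementation in this repository is the ConvNeXt in models/convnext.py.

# models/toy_net.py -- a minimal, hypothetical backbone sketch.
import torch.nn as nn

from mmcls.models import BACKBONES
from mmcv.runner import BaseModule


@BACKBONES.register_module(force=True)
class ToyNet(BaseModule):

    def __init__(self, in_channels=3, stem_channels=64, init_cfg=None):
        # Forward init_cfg to BaseModule so the initialization method,
        # including loading a pre-trained checkpoint, can be set in configs.
        super().__init__(init_cfg=init_cfg)
        self.stem = nn.Sequential(
            nn.Conv2d(in_channels, stem_channels, 3, stride=2, padding=1),
            nn.BatchNorm2d(stem_channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        # OpenMMLab backbones return a tuple of feature maps, one per
        # output stage, so that necks and heads can pick what they need.
        return (self.stem(x),)

With such a file in place, step 3 amounts to one line in models/__init__.py, e.g. from .toy_net import ToyNet.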

Create config files

Add your config files for each task to configs/. If you are not familiar with config files, this tutorial can help you.

In short, compose your config files from base config files for the model, dataset, schedule and runtime. You can also override settings from the base configs in your own config file, or even write all settings in a single file.

In this template, we provide a suite of popular base config files; you can also find more useful base configs in mmcls, mmdet and mmseg.
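
For example, a classification config for the hypothetical ToyNet above could look like the sketch below; the _base_ paths are illustrative and should be adjusted to the base files actually present under configs/.

# configs/toynet/toynet_8xb32_in1k.py -- a hypothetical config that
# composes base files for the model, dataset, schedule and runtime.
_base_ = [
    '../_base_/models/toynet.py',
    '../_base_/datasets/imagenet_bs32.py',
    '../_base_/schedules/imagenet_bs256.py',
    '../_base_/default_runtime.py',
]

# Override a base setting directly in this file; the dict is merged
# into the base optimizer config, so only the learning rate changes.
optimizer = dict(lr=0.01)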

Training and testing

For training and testing, you can directly use mim to train and test the models.

First, you need to add the current folder to the PYTHONPATH, so that Python can find your model files:

export PYTHONPATH=`pwd`:$PYTHONPATH 
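
In addition, you can make each config declare its dependency on the local models package explicitly with MMCV's custom_imports mechanism, so a missing import fails fast when the config is loaded (a sketch; the repository root still needs to be importable, e.g. via the PYTHONPATH line above):

# Add at the top of a config file: Config.fromfile() will import the
# local `models` package, which registers the custom backbone.
custom_imports = dict(imports=['models'], allow_failed_imports=False)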

On local single GPU:

# train classification models
mim train mmcls $CONFIG --work-dir $WORK_DIR

# test classification models
mim test mmcls $CONFIG -C $CHECKPOINT --metrics accuracy --metric-options "topk=(1, 5)"

# train object detection / instance segmentation models
mim train mmdet $CONFIG --work-dir $WORK_DIR

# test object detection / instance segmentation models
mim test mmdet $CONFIG -C $CHECKPOINT --eval bbox segm

# train semantic segmentation models
mim train mmseg $CONFIG --work-dir $WORK_DIR

# test semantic segmentation models
mim test mmseg $CONFIG -C $CHECKPOINT --eval mIoU
  • CONFIG: the config files under the directory configs/
  • WORK_DIR: the working directory to save configs, logs, and checkpoints
  • CHECKPOINT: the path of the checkpoint downloaded from our model zoo or trained by yourself

On multiple GPUs (4 GPUs here):

# train classification models
mim train mmcls $CONFIG --work-dir $WORK_DIR --launcher pytorch --gpus 4

# test classification models
mim test mmcls $CONFIG -C $CHECKPOINT --metrics accuracy --metric-options "topk=(1, 5)" --launcher pytorch --gpus 4

# train object detection / instance segmentation models
mim train mmdet $CONFIG --work-dir $WORK_DIR --launcher pytorch --gpus 4

# test object detection / instance segmentation models
mim test mmdet $CONFIG -C $CHECKPOINT --eval bbox segm --launcher pytorch --gpus 4

# train semantic segmentation models
mim train mmseg $CONFIG --work-dir $WORK_DIR --launcher pytorch --gpus 4 

# test semantic segmentation models
mim test mmseg $CONFIG -C $CHECKPOINT --eval mIoU --launcher pytorch --gpus 4
  • CONFIG: the config files under the directory configs/
  • WORK_DIR: the working directory to save configs, logs, and checkpoints
  • CHECKPOINT: the path of the checkpoint downloaded from our model zoo or trained by yourself

On multiple GPUs across multiple nodes with Slurm (16 GPUs in total here):

# train classification models
mim train mmcls $CONFIG --work-dir $WORK_DIR --launcher slurm --gpus 16 --gpus-per-node 8 --partition $PARTITION

# test classification models
mim test mmcls $CONFIG -C $CHECKPOINT --metrics accuracy --metric-options "topk=(1, 5)" --launcher slurm --gpus 16 --gpus-per-node 8 --partition $PARTITION

# train object detection / instance segmentation models
mim train mmdet $CONFIG --work-dir $WORK_DIR --launcher slurm --gpus 16 --gpus-per-node 8 --partition $PARTITION

# test object detection / instance segmentation models
mim test mmdet $CONFIG -C $CHECKPOINT --eval bbox segm --launcher slurm --gpus 16 --gpus-per-node 8 --partition $PARTITION

# train semantic segmentation models
mim train mmseg $CONFIG --work-dir $WORK_DIR --launcher slurm --gpus 16 --gpus-per-node 8 --partition $PARTITION

# test semantic segmentation models
mim test mmseg $CONFIG -C $CHECKPOINT --eval mIoU --launcher slurm --gpus 16 --gpus-per-node 8 --partition $PARTITION
  • CONFIG: the config files under the directory configs/
  • WORK_DIR: the working directory to save configs, logs, and checkpoints
  • CHECKPOINT: the path of the checkpoint downloaded from our model zoo or trained by yourself
  • PARTITION: the Slurm partition you are using