
MaskAL - Active learning for Mask R-CNN in Detectron2

[Figure: schematic overview of the MaskAL active-learning framework]

Summary

MaskAL is an active-learning framework that automatically selects the most-informative images for training Mask R-CNN. By using MaskAL, it is possible to reduce the number of image annotations without negatively affecting the performance of Mask R-CNN. Generally speaking, MaskAL involves the following steps:

  1. Train Mask R-CNN on a small initial subset of a bigger dataset
  2. Use the trained Mask R-CNN algorithm to make predictions on the unlabelled images of the remaining dataset
  3. Select the most-informative images with a sampling algorithm
  4. Annotate the most-informative images, and then retrain Mask R-CNN on the most-informative images
  5. Repeat steps 2-4 for a specified number of sampling iterations
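To make the procedure concrete, the loop can be sketched in a few lines of Python. This is an illustrative sketch only, not the MaskAL code base: train_mask_rcnn and uncertainty are hypothetical stand-ins for the actual training and sampling routines.

import random

# Toy stand-ins so that the sketch runs; in MaskAL these correspond to
# Mask R-CNN training and to the uncertainty-based sampling algorithm.
def train_mask_rcnn(labelled_images):
    return None  # placeholder for a trained model

def uncertainty(model, image):
    return random.random()  # placeholder informativeness score

full_dataset = ["img_%04d.jpg" % i for i in range(100)]
labelled = set(random.sample(full_dataset, 10))     # step 1: initial subset
unlabelled = set(full_dataset) - labelled
model = train_mask_rcnn(labelled)

pool_size, loops = 5, 3
for _ in range(loops):                              # step 5: repeat
    # steps 2-3: score the unlabelled images, select the most-informative ones
    ranked = sorted(unlabelled, key=lambda im: uncertainty(model, im), reverse=True)
    most_informative = set(ranked[:pool_size])
    # step 4: annotate the selected images, then retrain
    labelled |= most_informative
    unlabelled -= most_informative
    model = train_mask_rcnn(labelled)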

The figure below shows the performance improvement obtained with MaskAL on our dataset. By using MaskAL, the performance of Mask R-CNN improved more quickly, and therefore 1400 annotations could be saved (see the black dashed line):

[Figure: performance of Mask R-CNN as a function of the number of annotations, with and without MaskAL]

Installation

See INSTALL.md

Data preparation and training

Split the dataset into a training set, a validation set and a test set. It is not required to annotate every image in the training set, because MaskAL will select the most-informative images automatically.

  1. From the training set, a smaller initial dataset is randomly sampled (the dataset size can be specified in the maskAL.yaml file). The images that do not have an annotation are placed in the annotate subfolder inside the image folder. You first need to annotate these images with LabelMe (json), V7-Darwin (json), Supervisely (json) or CVAT (xml) (when using CVAT, export the annotations to LabelMe 3.0 format). Refer to our annotation procedure: ANNOTATION.md
  2. Step 1 is repeated for the validation set and the test set (the file locations can be specified in the maskAL.yaml file).
  3. After the first training iteration of Mask R-CNN, the sampling algorithm selects the most-informative images (the number of selected images, pool_size, can be specified in the maskAL.yaml file).
  4. The most-informative images that don't have an annotation are placed in the annotate subfolder. Annotate these images with LabelMe (json), V7-Darwin (json), Supervisely (json) or CVAT (xml) (when using CVAT, export the annotations to LabelMe 3.0 format).
  5. OPTIONAL: it is possible to use the trained Mask R-CNN model to auto-annotate the unlabelled images to further reduce annotation time. Activate auto_annotate in the maskAL.yaml file, and specify the export_format (currently supported formats: 'labelme', 'cvat', 'darwin', 'supervisely').
  6. Steps 3-5 are repeated for several training iterations. The number of iterations (loops) can be specified in the maskAL.yaml file.
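To illustrate the resulting folder structure, a possible layout of the traindir is sketched below. The file names are hypothetical; the only convention taken from the steps above is that images without an annotation are placed in the annotate subfolder inside the image folder:

datasets/train/
    image_0001.jpg
    image_0001.json        (completed annotation, e.g. LabelMe)
    image_0002.jpg
    image_0002.json
    annotate/
        image_0123.jpg     (selected by MaskAL, still to be annotated)
        image_0456.jpg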

Please note that MaskAL does not work with the default COCO json-files of Detectron2. These json-files contain all annotations, which must be completed before the training starts. Because MaskAL involves an iterative train-and-annotate procedure, the default COCO json-files lack the required format.

How to use MaskAL

Open a terminal (Ctrl+Alt+T):

(base) user@computer:~$ cd maskal
(base) user@computer:~/maskal$ conda activate maskAL
(maskAL) user@computer:~/maskal$ python maskAL.py --config maskAL.yaml

Change the following settings in the maskAL.yaml file:

Setting                  Description
weightsroot              The file directory where the weight-files are stored
resultsroot              The file directory where the result-files are stored
dataroot                 The root directory where all image-files are stored
use_initial_train_dir    Set this to True to start the active learning from an initial training dataset. When False, an initial dataset of size initial_datasize is randomly sampled from the traindir
initial_train_dir        When use_initial_train_dir is activated: the file directory where the initial training images and annotations are stored
traindir                 The file directory where the training images and annotations are stored
valdir                   The file directory where the validation images and annotations are stored
testdir                  The file directory where the test images and annotations are stored
network_config           The Mask R-CNN configuration-file (.yaml) (see the folder './configs')
pretrained_weights       The pretrained weights to start the active learning. Either specify the network_config (.yaml) or a custom weights-file (.pth or .pkl)
cuda_visible_devices     The identifiers of the CUDA device(s) to use for training and sampling (as a string, for example: '0,1')
classes                  The names of the classes in the image annotations
learning_rate            The learning rate to train Mask R-CNN (default value: 0.01)
confidence_threshold     Confidence threshold for the image analysis with Mask R-CNN (default value: 0.5)
nms_threshold            Non-maximum suppression threshold for the image analysis with Mask R-CNN (default value: 0.3)
initial_datasize         The size of the initial dataset to start the active learning (when use_initial_train_dir is False)
pool_size                The number of most-informative images that are selected from the traindir in each sampling iteration
loops                    The number of sampling iterations
auto_annotate            Set this to True to auto-annotate the unlabelled images
export_format            When auto_annotate is activated: the export format of the annotations (currently supported formats: 'labelme', 'cvat', 'darwin', 'supervisely')
supervisely_meta_json    When auto_annotate is activated with export_format 'supervisely': the file location of the meta.json for the Supervisely export
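As an illustration, a minimal maskAL.yaml could look like the sketch below. The keys are the settings from the table above; the values are example placeholders (the class names and network_config are taken from the auto_annotate.py example further down, and the YAML list syntax for classes is an assumption):

weightsroot: ./weights
resultsroot: ./results
dataroot: ./datasets
use_initial_train_dir: False
traindir: ./datasets/train
valdir: ./datasets/val
testdir: ./datasets/test
network_config: COCO-InstanceSegmentation/mask_rcnn_X_101_32x8d_FPN_3x.yaml
pretrained_weights: COCO-InstanceSegmentation/mask_rcnn_X_101_32x8d_FPN_3x.yaml
cuda_visible_devices: '0'
classes: ['healthy', 'damaged', 'matured', 'cateye', 'headrot']
learning_rate: 0.01
confidence_threshold: 0.5
nms_threshold: 0.3
initial_datasize: 100
pool_size: 200
loops: 5
auto_annotate: True
export_format: labelme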

Description of the other settings in the maskAL.yaml file: MISC_SETTINGS.md

Please refer to the folder active_learning/config for more setting-files.

Other software scripts

Use a trained Mask R-CNN algorithm to auto-annotate unlabelled images: auto_annotate.py

Argument                   Description
--img_dir                  The file directory where the unlabelled images are stored
--network_config           Configuration file of the network backbone
--classes                  The names of the classes of the annotated instances
--conf_thres               Confidence threshold of the CNN for the image analysis
--nms_thres                Non-maximum suppression threshold of the CNN for the image analysis
--weights_file             Weight-file (.pth) of the trained CNN
--export_format            Specify the export format of the annotations (currently supported formats: 'labelme', 'cvat', 'darwin', 'supervisely')
--supervisely_meta_json    The file location of the meta.json for the Supervisely export

Example syntax (auto_annotate.py):

python auto_annotate.py --img_dir datasets/train --network_config COCO-InstanceSegmentation/mask_rcnn_X_101_32x8d_FPN_3x.yaml --classes healthy damaged matured cateye headrot --conf_thres 0.5 --nms_thres 0.2 --weights_file weights/broccoli/model_final.pth --export_format supervisely --supervisely_meta_json datasets/meta.json
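Since --supervisely_meta_json is only used for the Supervisely export (see the table above), it can be omitted for the other formats; for example, the same command with LabelMe output:

python auto_annotate.py --img_dir datasets/train --network_config COCO-InstanceSegmentation/mask_rcnn_X_101_32x8d_FPN_3x.yaml --classes healthy damaged matured cateye headrot --conf_thres 0.5 --nms_thres 0.2 --weights_file weights/broccoli/model_final.pth --export_format labelme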

Troubleshooting

See TROUBLESHOOTING.md

Citation

See our research article for more information, or cite it as follows:

@misc{blok2021active,
      title={Active learning with MaskAL reduces annotation effort for training Mask R-CNN}, 
      author={Pieter M. Blok and Gert Kootstra and Hakim Elchaoui Elghor and Boubacar Diallo and Frits K. van Evert and Eldert J. van Henten},
      year={2021},
      eprint={2112.06586},
      archivePrefix={arXiv},
      primaryClass={cs.CV},
      url = {https://arxiv.org/abs/2112.06586},
}

License

Our software was forked from Detectron2 (https://github.com/facebookresearch/detectron2). As such, the software is released under the Apache 2.0 license.

Acknowledgements

The uncertainty calculation methods were inspired by the research of Doug Morrison:
https://nikosuenderhauf.github.io/roboticvisionchallenges/assets/papers/CVPR19/rvc_4.pdf

Two software methods were inspired by the work of RovelMan:
https://github.com/RovelMan/active-learning-framework

MaskAL uses the Bayesian Active Learning (BaaL) software:
https://github.com/ElementAI/baal

Contact

MaskAL is developed and maintained by Pieter Blok.
