Influence Selection for Active Learning (ISAL)

This project hosts the code for implementing the ISAL algorithm for object detection and image classification, as presented in our paper:

Influence Selection for Active Learning;
Zhuoming Liu, Hao Ding, Huaping Zhong, Weijia Li, Jifeng Dai, Conghui He;
In: Proc. Int. Conf. Computer Vision (ICCV), 2021.
arXiv preprint arXiv:2108.09331

The full paper is available at: https://arxiv.org/abs/2108.09331.

Our object detection implementation is included under ./detection and is based on MMDetection.

Highlights

  • Task agnostic: We evaluate ISAL in both object detection and image classification. Compared with previous methods, ISAL decreases the annotation cost by at least 12%, 12%, 3%, 13%, and 16% on CIFAR10, SVHN, CIFAR100, VOC2012, and COCO, respectively.

  • Model agnostic: We evaluate ISAL with different models in object detection. On the COCO dataset, with the one-stage anchor-free detector FCOS, ISAL decreases the annotation cost by at least 16%; with the two-stage anchor-based detector Faster R-CNN, it decreases the annotation cost by at least 10%.

Because ISAL only requires the model gradients, which can be easily obtained from any neural network regardless of the task and the complexity of the model structure, our proposed ISAL is task-agnostic and model-agnostic.
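
The intuition can be sketched in a few lines of PyTorch. The snippet below is a minimal, illustrative sketch of gradient-based influence scoring, not the exact implementation in this repository; the model, loss function, and sample iterables are hypothetical placeholders.

import torch

def influence_scores(model, loss_fn, val_batch, unlabeled_samples):
    # Illustrative only: score each (pseudo-labeled) unlabeled sample by how well
    # its gradient aligns with the gradient of the loss on a small validation batch.
    params = [p for p in model.parameters() if p.requires_grad]

    val_x, val_y = val_batch
    val_loss = loss_fn(model(val_x), val_y)
    val_grad = torch.autograd.grad(val_loss, params)

    scores = []
    for x, y in unlabeled_samples:
        sample_loss = loss_fn(model(x), y)
        sample_grad = torch.autograd.grad(sample_loss, params)
        # Influence-style score: dot product between the two gradient vectors.
        score = sum((g1 * g2).sum() for g1, g2 in zip(val_grad, sample_grad))
        scores.append(score.item())
    return scores

In an active learning loop, the highest-scoring samples would then be sent for annotation and added to the labeled set before the next training step.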

Required hardware

We use 4 NVIDIA V100 GPUs for object detection and 1 NVIDIA TITAN Xp GPU for image classification.

Installation

Our ISAL implementation for object detection is based on mmdetection v2.4.0 with mmcv v1.1.1, which require PyTorch 1.5, CUDA 10.1, and cuDNN 7. We provide a Dockerfile (./detection/Dockerfile) to prepare the environment. Once the environment is prepared, please copy all files under the folder ./detection into the directory /mmdetection inside the docker container.

Our ISAL implementation for image classification is based on pycls v0.1, which requires PyTorch 1.6, CUDA 10.1, and cuDNN 7.
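
As a quick sanity check of the versions listed above, the following Python snippet (illustrative, not part of the repository) prints the PyTorch, CUDA, and cuDNN versions seen by the installed build:

import torch

print("PyTorch:", torch.__version__)                # expect 1.5.x (detection) or 1.6.x (classification)
print("CUDA:", torch.version.cuda)                   # expect 10.1
print("cuDNN:", torch.backends.cudnn.version())      # expect a 7xxx build number
print("GPU available:", torch.cuda.is_available())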

Training

The following command performs the ISAL algorithm with the FCOS detector on the COCO dataset; the active learning algorithm iterates for 20 steps on 4 GPUs:

bash dist_run_isal.sh /workdir /datadir \
    /mmdetection/configs/mining_experiments/ \
    fcos/fcos_r50_caffe_fpn_1x_coco_influence_function.py \
    --mining-method=influence --seed=42 --deterministic \
    --noised-score-thresh=0.1

Note that:

  1. If you want to use fewer GPUs, please change GPUS in the shell script. In addition, you may need to change samples_per_gpu in the config file to maintain a total batch size of 8.
  2. The models and all inference results will be saved into /workdir.
  3. The data should be placed in /datadir.
  4. If you want to run our code on VOC or your own dataset, we suggest converting the data into COCO format (see the sketch after these notes).
  5. If you want to change the number of active learning steps, please change TRAIN_STEP in the shell script. If you want to change the number of images selected in step_0 or in the following steps, please change INIT_IMG_NUM or IMG_NUM in the shell script, respectively.
  6. The shell script will delete all trained models after the active learning steps finish. If you want to keep the models, please change DELETE_MODEL in the shell script.
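
If you do convert a custom dataset, the target layout is the standard COCO annotation JSON. The sketch below shows the minimal structure with hypothetical image, box, and category values; adapt the fields and the output path to your own data.

import json

coco = {
    "images": [
        {"id": 1, "file_name": "000001.jpg", "width": 640, "height": 480},
    ],
    "annotations": [
        # bbox is [x, y, width, height]; area and iscrowd are required by COCO tools.
        {"id": 1, "image_id": 1, "category_id": 1,
         "bbox": [100, 120, 50, 80], "area": 50 * 80, "iscrowd": 0},
    ],
    "categories": [
        {"id": 1, "name": "person"},
    ],
}

with open("instances_train.json", "w") as f:
    json.dump(coco, f)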

The following command performs the ISAL algorithm with ResNet-18 on the CIFAR10 dataset; the active learning algorithm iterates for 10 steps on 1 GPU:

bash run_isal.sh /workdir /datadir \
    pycls/configs/archive/cifar/resnet/R-18_nds_1gpu_cifar10.yaml \
    --mining-method=influence --random-seed=0

Note that:

  1. The models and all inference results will be saved into /workdir.
  2. The data should be placed in /datadir.
  3. If you want to train on SVHN or your own dataset, we suggest converting the data into CIFAR10 format (see the sketch after these notes).
  4. The STEP in the shell script indicates that in each active learning step the algorithm adds (1/STEP)% of the whole dataset to the labeled dataset. TRAIN_STEP indicates the total number of active learning steps.
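
For reference, the CIFAR10 python version stores each batch as a pickled dict of flattened pixel rows and labels. The sketch below writes one batch in that layout from hypothetical arrays; whether pycls needs additional metadata files, and the exact key types its loader expects, are assumptions to verify against the repository.

import pickle
import numpy as np

# Hypothetical data: N RGB images of size 32x32, each flattened to 3072 bytes
# in CIFAR10 channel order (all red values, then green, then blue).
num_images = 100
data = np.random.randint(0, 256, size=(num_images, 3 * 32 * 32), dtype=np.uint8)
labels = np.random.randint(0, 10, size=num_images).tolist()

batch = {b"data": data, b"labels": labels}  # bytes keys, matching pickle.load(..., encoding="bytes")

with open("data_batch_1", "wb") as f:
    pickle.dump(batch, f)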

Citations

Please consider citing our paper in your publications if this project helps your research. The BibTeX reference is as follows:

@inproceedings{liu2021influence,
  title={Influence selection for active learning},
  author={Liu, Zhuoming and Ding, Hao and Zhong, Huaping and Li, Weijia and Dai, Jifeng and He, Conghui},
  booktitle={Proceedings of the IEEE/CVF International Conference on Computer Vision},
  pages={9274--9283},
  year={2021}
}

Acknowledgments

We thank Zheng Zhu for implementing the classification pipeline. We thank Bin Wang and Xizhou Zhu for discussion and for helping with the experiments. We thank Yuan Tian and Jiamin He for discussing the mathematical derivation.

License

For academic use only. For commercial use, please contact the authors.
