Code for "The Box Size Confidence Bias Harms Your Object Detector"

Overview


Disclaimer: This repository is for research purposes only. It is designed to maintain reproducibility of the experiments described in "The Box Size Confidence Bias Harms Your Object Detector".

Setup

Download Annotations

Download the COCO 2017 annotations for train, val, and test-dev from here and move them into the folder structure below (alternatively, change the config in config/all/paths/annotations/coco_2017.yaml to match your local folder structure):

 .
 └── data
   └── coco
      └── annotations
        ├── instances_train2017.json
        ├── instances_val2017.json
        └── image_info_test-dev2017.json
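
If the annotations are not already available locally, the following is a minimal download sketch, assuming the standard COCO download URLs (adjust the paths if your layout differs):

# Fetch the COCO 2017 annotation archives and extract only the required files
mkdir -p data/coco/annotations
wget http://images.cocodataset.org/annotations/annotations_trainval2017.zip
unzip -j annotations_trainval2017.zip 'annotations/instances_train2017.json' 'annotations/instances_val2017.json' -d data/coco/annotations
wget http://images.cocodataset.org/annotations/image_info_test2017.zip
unzip -j image_info_test2017.zip 'annotations/image_info_test-dev2017.json' -d data/coco/annotations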

Generate Detections

Generate detections on the COCO 2017 train, val, and test-dev sets, save them as JSON files in the COCO results format, and move them to data/detections/MODEL_NAME; see config/all/detections/default_all.yaml for all detectors used and for how to add other detectors.
The official implementations for the used detectors are linked in the examples below.
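
The detections are expected in the standard COCO results format (one JSON list of detections per split). The sketch below illustrates that layout; MODEL_NAME, the file name, and the single entry are placeholders, and the file names actually expected per detector are configured in config/all/detections/default_all.yaml:

# Sketch of the expected detection JSON layout (COCO results format);
# MODEL_NAME, SPLIT.json, and the single entry are placeholders.
mkdir -p data/detections/MODEL_NAME
cat > data/detections/MODEL_NAME/SPLIT.json <<'EOF'
[
  {"image_id": 42, "category_id": 18, "bbox": [258.15, 41.29, 348.26, 243.78], "score": 0.236}
]
EOF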

Examples

CenterNet (Hourglass)

To generate the detections for CenterNet with the Hourglass backbone, first follow the installation instructions. Then download ctdet_coco_hg.pth from the official source into the /models folder. Finally, generate the detections from the /src folder:

# On val
python3 test.py ctdet --arch hourglass --exp_id Centernet_HG_val --dataset coco --load_model ../models/ctdet_coco_hg.pth 
# On test-dev
python3 test.py ctdet --arch hourglass --exp_id Centernet_HG_test-dev --dataset coco --load_model ../models/ctdet_coco_hg.pth --trainval
# On train (rewrite line 56 of test.py to set split = "train", producing test_train.py)
sed '56s/.*/  split = "train"/' test.py > test_train.py
python3 test_train.py ctdet --arch hourglass --exp_id Centernet_HG_train --dataset coco --load_model ../models/ctdet_coco_hg.pth

The scaling for test-time augmentation (TTA) is set via the --test_scales LIST_SCALES flag. For example, to generate detections at only the 0.5x scale, add --test_scales 0.5 (see the sketch below).
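
A hedged example on the val split (the exp_id is just an illustrative name, not one used elsewhere in this repository):

# Detections at only the 0.5x scale on val
python3 test.py ctdet --arch hourglass --exp_id Centernet_HG_TTA_050_val --dataset coco --load_model ../models/ctdet_coco_hg.pth --test_scales 0.5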

RetinaNet with MMDetection

To generate the detection files using MMDetection, first follow the installation instructions. Then download the specific model weights (in this example, retinanet_x101_64x4d_fpn_2x_coco_20200131-bca068ab.pth) to PATH_TO_DOWNLOADED_WEIGHTS and execute the following commands:

# On train
python3 tools/test.py configs/retinanet/retinanet_x101_64x4d_fpn_2x_coco.py PATH_TO_DOWNLOADED_WEIGHTS/retinanet_x101_64x4d_fpn_2x_coco_20200131-bca068ab.pth --eval bbox --eval-options jsonfile_prefix='PATH_TO_THIS_REPO/detections/retinanet_x101_64x4d_fpn_2x/train2017' --cfg-options data.test.img_prefix='PATH_TO_COCO_IMGS/train2017' data.test.ann_file='PATH_TO_COCO_ANNS/annotations/instances_train2017.json'
# On val
python3 tools/test.py configs/retinanet/retinanet_x101_64x4d_fpn_2x_coco.py PATH_TO_DOWNLOADED_WEIGHTS/retinanet_x101_64x4d_fpn_2x_coco_20200131-bca068ab.pth --eval bbox --eval-options jsonfile_prefix='PATH_TO_THIS_REPO/detections/retinanet_x101_64x4d_fpn_2x/val2017' --cfg-options data.test.img_prefix='PATH_TO_COCO_IMGS/val2017' data.test.ann_file='PATH_TO_COCO_ANNS/annotations/instances_val2017.json'
# On test-dev
python3 tools/test.py configs/retinanet/retinanet_x101_64x4d_fpn_2x_coco.py PATH_TO_DOWNLOADED_WEIGHTS/retinanet_x101_64x4d_fpn_2x_coco_20200131-bca068ab.pth --eval bbox --eval-options jsonfile_prefix='PATH_TO_THIS_REPO/detections/retinanet_x101_64x4d_fpn_2x/test-dev2017' --cfg-options data.test.img_prefix='PATH_TO_COCO_IMGS/test2017' data.test.ann_file='PATH_TO_COCO_ANNS/annotations/image_info_test-dev2017.json'

Install Dependencies

pip3 install -r requirements.txt

Optional Dependencies

# Faster coco evaluation (used if available)
pip3 install fast_coco_eval
# Parallel multi-runs, if enough RAM is available (add "hydra/launcher=joblib" to every command with -m flag)
pip install hydra-joblib-launcher
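
For example, a parallel multirun with the joblib launcher enabled (the command itself is the Table 2 ablation from below; only the hydra/launcher override is appended, as described above):

# Example: run a multirun in parallel via the joblib launcher
python3 calibrate.py -cn ablate_metrics "seed=range(4,14)" hydra/launcher=joblib -m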

Experiments

Most of the experiments are performed using the CenterNet (HG) detections. To change the detector, add detections=OTHER_DETECTOR, with the location of OTHER_DETECTOR's detections specified in config/all/detections/default_all.yaml. The results of each experiment are saved to outputs/EXPERIMENT/DATE, or to multirun/EXPERIMENT/DATE in the case of a multirun (-m flag).

Figure 2: Calibration curve of histogram binning and modified version

# original histogram binning calibration curve
python3 create_plots.py -cn plot_org_hist_bin
# modified histogram binning calibration curve:
python3 create_plots.py -cn plot_mod_hist_bin

Table 1: Ablation of histogram binning modifications

python3 calibrate.py -cn ablate_modified_hist 

Table 2: Ablation of optimization metrics of calibration on validation split

python3 calibrate.py -cn ablate_metrics  "seed=range(4,14)" -m

Figure 3: Bounding box size bias on train and val data detections

Plot of calibration curve:

# on validation data
python3 create_plots.py -cn plot_miscal name="plot_miscal_val" split="val"
# on train data:
python3 create_plots.py -cn plot_miscal name="plot_miscal_train" split="train" calib.conf_bins=20

Table 3: Ablation of optimization metrics of calibration on training data

python3 calibrate.py -cn explore_train

Table 4: Effect of individual calibration on TTA

  1. Generate detections (on the train and val splits) for each scale factor individually (CenterNet_HG_TTA_050, CenterNet_HG_TTA_075, CenterNet_HG_TTA_100, CenterNet_HG_TTA_125, CenterNet_HG_TTA_150) and for the complete TTA (CenterNet_HG_TTA_ens).

  2. Generate individually calibrated detections:

    python3 calibrate.py -cn calibrate_train name="calibrate_train_tta" detector="CenterNet_HG_TTA_050","CenterNet_HG_TTA_075","CenterNet_HG_TTA_100","CenterNet_HG_TTA_125","CenterNet_HG_TTA_150","CenterNet_HG_TTA_ens" -m
  3. Copy the calibrated detections from multirun/calibrate_train_tta/DATE/MODEL_NAME/quantile_spline_ontrain_opt_tradeoff_full/val/MODEL_NAME.json to data/calibrated/MODEL_NAME/val/results.json for each MODEL_NAME in (CenterNet_HG_TTA_050, CenterNet_HG_TTA_075, CenterNet_HG_TTA_100, CenterNet_HG_TTA_125, CenterNet_HG_TTA_150); a copy sketch follows this list.

  4. Generate the TTA ensemble from the calibrated detections:

    python3 enseble.py -cn enseble
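
For step 3, a minimal copy sketch (DATE is the concrete multirun timestamp on your machine; replace it before running):

# Copy the per-scale calibrated val detections into data/calibrated/ (step 3 above)
DATE=REPLACE_WITH_YOUR_RUN_TIMESTAMP
for MODEL_NAME in CenterNet_HG_TTA_050 CenterNet_HG_TTA_075 CenterNet_HG_TTA_100 CenterNet_HG_TTA_125 CenterNet_HG_TTA_150; do
  mkdir -p data/calibrated/$MODEL_NAME/val
  cp multirun/calibrate_train_tta/$DATE/$MODEL_NAME/quantile_spline_ontrain_opt_tradeoff_full/val/$MODEL_NAME.json \
     data/calibrated/$MODEL_NAME/val/results.json
done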

Figure 4: Ablation of IoU threshold

python3 calibrate.py -cn calibrate_train name="ablate_iou" "iou_threshold=range(0.5,0.96,0.05)" -m

Table 5: Calibration method on different models

python3 calibrate.py -cn calibrate_train name="calibrate_all_models" detector=LIST_ALL_MODELS -m

The test-dev predictions can be found in multirun/calibrate_all_models/DATE/MODEL_NAME/quantile_spline_ontrain_opt_tradeoff_full/test/MODEL_NAME.json and can be evaluated using the official evaluation server, for example packaged as sketched below.
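
A hedged packaging sketch for one model (DATE and MODEL_NAME are placeholders; the file naming follows the COCO upload guidelines as far as I recall them, so double-check the server's instructions before submitting):

# Package one model's test-dev predictions for the COCO evaluation server
# (file naming assumed from the COCO upload guidelines; verify before submitting)
cp multirun/calibrate_all_models/DATE/MODEL_NAME/quantile_spline_ontrain_opt_tradeoff_full/test/MODEL_NAME.json \
   detections_test-dev2017_MODEL_NAME_results.json
zip detections_test-dev2017_MODEL_NAME_results.zip detections_test-dev2017_MODEL_NAME_results.json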

Supplementary Material

A. Figures 5 & 6: Performance Change for Extended Optimization Metrics

python3 calibrate.py -cn ablate_metrics_extended  "seed=range(4,14)" -m

A. Table 6: Influence of parameter search spaces on performance gain

# Results for B0, C0
python3 calibrate.py -cn calibrate_train
# Results for B0, C1
python3 calibrate.py -cn calibrate_train_larger_cbins
# Results for B0 union B1, C0
python3 calibrate.py -cn calibrate_train_larger_bbins
# Results for B0 union B1, C0 union C1
python3 calibrate.py -cn calibrate_train_larger_cbbins

A. Table 7: Influence of calibration method on different sized versions of EfficientDet

python3 calibrate.py -cn calibrate_train name="influence_modelsize" detector="Efficientdet_D0","Efficientdet_D1","Efficientdet_D2","Efficientdet_D3","Efficientdet_D4","Efficientdet_D5","Efficientdet_D6","Efficientdet_D7" -m