Parameterized AP Loss

By Chenxin Tao, Zizhang Li, Xizhou Zhu, Gao Huang, Yong Liu, Jifeng Dai

This is the official implementation of the NeurIPS 2021 paper Searching Parameterized AP Loss for Object Detection.

Introduction

TL;DR.

Parameterized AP Loss aims to better align network training and evaluation in object detection. It expresses the classification and localization objectives in a single unified formula built from parameterized functions, whose optimal parameters are searched automatically.

[Figure: PAPLoss-intro]

Introduction.

  • In the evaluation of object detectors, Average Precision (AP) captures the performance of the localization and classification sub-tasks simultaneously.

  • In training, due to the non-differentiable nature of the AP metric, previous methods adopt separate differentiable losses for the two sub-tasks. Such misalignment between training and evaluation can lead to performance degradation.

  • Some existing works seek to design surrogate losses for the AP metric manually, which requires expertise and may still be sub-optimal.

  • In this paper, we propose Parameterized AP Loss, where parameterized functions are introduced to substitute the non-differentiable components in the AP calculation. Different AP approximations are thus represented by a family of parameterized functions under a unified formula. An automatic parameter search algorithm is then employed to find the optimal parameters. Extensive experiments on the COCO benchmark demonstrate that the proposed Parameterized AP Loss consistently outperforms existing handcrafted losses. (A minimal sketch of the parameterization idea follows the overview figure below.)

[Figure: PAPLoss-overview]
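
To make the core idea concrete, here is a minimal PyTorch sketch of an AP-style loss in which the non-differentiable Heaviside step used in AP ranking is replaced by a smooth, searchable substitute. The sigmoid-based form and the scale/bias parameters are illustrative assumptions, not the paper's exact function family or this repository's implementation.

import torch

def param_step(x, scale, bias):
    # Smooth, searchable substitute for the Heaviside step H(x)
    # used in AP ranking. The sigmoid form and (scale, bias)
    # parameters are illustrative, not the paper's exact family.
    return torch.sigmoid(scale * x + bias)

def parameterized_ap_loss(pos_scores, neg_scores, scale=4.0, bias=0.0):
    # Differentiable approximation of 1 - AP over the scores of
    # positive detections (pos_scores, shape (P,)) and negative
    # detections (neg_scores, shape (N,)).
    diff_pp = pos_scores.unsqueeze(0) - pos_scores.unsqueeze(1)  # (P, P)
    diff_np = neg_scores.unsqueeze(0) - pos_scores.unsqueeze(1)  # (P, N)

    # Smoothed counts of positives/negatives ranked above each positive.
    h_pp = param_step(diff_pp, scale, bias)
    h_pp = h_pp * (1.0 - torch.eye(len(pos_scores)))  # drop self-comparisons
    rank_pos = 1.0 + h_pp.sum(dim=1)
    rank_neg = param_step(diff_np, scale, bias).sum(dim=1)

    # Smoothed precision at each positive; its mean approximates AP.
    precision = rank_pos / (rank_pos + rank_neg)
    return 1.0 - precision.mean()

# Example: random classification logits for 8 positives and 100 negatives.
pos = torch.randn(8, requires_grad=True)
neg = torch.randn(100, requires_grad=True)
parameterized_ap_loss(pos, neg).backward()

During the search phase, parameters such as scale and bias would be tuned automatically against the actual AP metric rather than set by hand.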

Main Results with RetinaNet

Model      Loss                    AP    Config
R50+FPN    Focal Loss + L1         37.5  config
R50+FPN    Focal Loss + GIoU       39.2  config
R50+FPN    AP Loss + L1            35.4  config
R50+FPN    aLRP Loss               39.0  config
R50+FPN    Parameterized AP Loss   40.5  search config, training config

Main Results with Faster R-CNN

Model      Loss                    AP    Config
R50+FPN    Cross Entropy + L1      39.0  config
R50+FPN    Cross Entropy + GIoU    39.1  config
R50+FPN    aLRP Loss               40.7  config
R50+FPN    AutoLoss-Zero           39.3  -
R50+FPN    CSE-AutoLoss-A          40.4  -
R50+FPN    Parameterized AP Loss   42.0  search config, training config

Installation

Our implementation is based on MMDetection and aLRPLoss. Thanks for their code!

Requirements

  • Linux or macOS
  • Python 3.6+
  • PyTorch 1.3+
  • CUDA 9.2+
  • GCC 5+
  • mmcv

Recommended configuration: Python 3.7, PyTorch 1.7, CUDA 10.1.

Install mmdetection with Parameterized AP Loss

a. Create a conda virtual environment and activate it.

conda create -n paploss python=3.7 -y
conda activate paploss

b. Install PyTorch and torchvision following the official instructions.

conda install pytorch=1.7.0 torchvision=0.8.0 cudatoolkit=10.1 -c pytorch

c. Install mmcv following the official instructions. We recommend installing the pre-built mmcv-full. For example, if your CUDA version is 10.1 and your PyTorch version is 1.7.0, you can run:

pip install mmcv-full -f https://download.openmmlab.com/mmcv/dist/cu101/torch1.7.0/index.html

d. Clone the repository.

git clone https://github.com/fundamentalvision/Parameterized-AP-Loss.git
cd Parameterized-AP-Loss

e. Install the build requirements and then install mmdetection with Parameterized AP Loss. (We install our forked version of pycocotools via the GitHub repo instead of PyPI for better compatibility with our repo.)

pip install -r requirements/build.txt
pip install -v -e .  # or "python setup.py develop"

Usage

Dataset preparation

Please follow the official mmdetection guide to organize the datasets. Note that we split the original training set into search training and search validation sets with this split tool (a rough sketch of such a split is shown after the directory layout below). The recommended data structure is as follows:

Parameterized-AP-Loss
├── mmdet
├── tools
├── configs
└── data
    └── coco
        ├── annotations
        |   ├── search_train2017.json
        |   ├── search_val2017.json
        |   ├── instances_train2017.json
        |   └── instances_val2017.json
        ├── train2017
        ├── val2017
        └── test2017
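
For reference, a rough sketch of how such a split could be produced is given below. This is a hypothetical stand-in for the repo's actual split tool; the real tool, split ratio, and random seed may differ.

import json
import random

# Load the original COCO training annotations.
with open('data/coco/annotations/instances_train2017.json') as f:
    coco = json.load(f)

# Shuffle image ids and split them (a 50/50 ratio is assumed here).
img_ids = [img['id'] for img in coco['images']]
random.seed(0)
random.shuffle(img_ids)
cut = len(img_ids) // 2
splits = {
    'search_train2017': set(img_ids[:cut]),
    'search_val2017': set(img_ids[cut:]),
}

# Write one annotation file per split, keeping only the matching
# images and their annotations.
for name, ids in splits.items():
    subset = dict(coco)
    subset['images'] = [im for im in coco['images'] if im['id'] in ids]
    subset['annotations'] = [a for a in coco['annotations'] if a['image_id'] in ids]
    with open(f'data/coco/annotations/{name}.json', 'w') as f:
        json.dump(subset, f)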

Searching for Parameterized AP Loss

The search command format is

./tools/dist_search.sh {CONFIG_NAME} {NUM_GPUS}

For example, the command for searching for RetinaNet with 8 GPUs is as follows:

./tools/dist_search.sh ./search_configs/cfg_search_retina.py 8

Training models with the provided parameters

After searching, copy the optimal parameters into the provided training config (see the sketch below for where such parameters would go). We also provide a set of parameters that we have already searched.
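
For orientation, a hypothetical mmdetection-style config fragment is shown below. The loss type name, parameter keys, and values are assumptions; consult the provided configs for the real field names.

# Hypothetical config fragment -- the registered loss name and
# parameter keys are assumptions, shown only to indicate where the
# searched values would be pasted.
model = dict(
    bbox_head=dict(
        loss_cls=dict(
            type='ParameterizedAPLoss',        # assumed loss registry name
            theta=[0.73, 1.42, 0.25, 3.10],    # paste searched values here
        ),
    ),
)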

The re-training command format is

./tools/dist_train.sh {CONFIG_NAME} {NUM_GPUS}

For example, the command for training RetinaNet with 8 GPUs is as follows:

./tools/dist_train.sh ./configs/paploss/paploss_retinanet_r50_fpn.py 8

License

This project is released under the Apache 2.0 license.

Citing Parameterized AP Loss

If you find Parameterized AP Loss useful in your research, please consider citing:

@inproceedings{tao2021searching,
  title={Searching Parameterized AP Loss for Object Detection},
  author={Tao, Chenxin and Li, Zizhang and Zhu, Xizhou and Huang, Gao and Liu, Yong and Dai, Jifeng},
  booktitle={Thirty-Fifth Conference on Neural Information Processing Systems},
  year={2021}
}