Official PyTorch Implementation of Rank & Sort Loss [ICCV2021]

Overview

Rank & Sort Loss for Object Detection and Instance Segmentation

The official implementation of Rank & Sort Loss. Our implementation is based on mmdetection.

Rank & Sort Loss for Object Detection and Instance Segmentation,
Kemal Oksuz, Baris Can Cam, Emre Akbas, Sinan Kalkan, ICCV 2021 (Oral Presentation). (arXiv pre-print)

Summary

What is Rank & Sort (RS) Loss? Rank & Sort (RS) Loss supervises object detectors and instance segmentation methods to (i) rank the scores of the positive anchors above those of negative anchors, and at the same time (ii) sort the scores of the positive anchors with respect to their localisation qualities.
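
To make the two objectives concrete, the sketch below writes them down directly from this definition in plain PyTorch. It is a simplified, loop-based illustration under our own naming (rank_sort_errors and its arguments are not the repository's API); the actual implementation is vectorised, uses error-driven backpropagation, and smooths score ties.

```python
import torch

def rank_sort_errors(scores, ious, pos_mask):
    # scores:   (N,) classification logits of all anchors
    # ious:     (N,) localisation qualities in [0, 1]; 0 for negatives
    # pos_mask: (N,) bool, True for positive anchors
    rank_errs, sort_errs = [], []
    for i in pos_mask.nonzero(as_tuple=True)[0]:
        above = scores >= scores[i]            # anchors ranked at or above i (incl. i)
        n_fp = (above & ~pos_mask).sum()       # negatives wrongly ranked above i
        rank_errs.append(n_fp / above.sum())   # (i) ranking error of positive i
        # (ii) current sorting error: positives scored above i should have high IoU...
        current = (1.0 - ious[above & pos_mask]).mean()
        # ...compared against the best value achievable given the IoUs
        target = (1.0 - ious[pos_mask & (ious >= ious[i])]).mean()
        sort_errs.append(current - target)
    return torch.stack(rank_errs).mean(), torch.stack(sort_errs).mean()
```

For example, with scores = torch.tensor([2., 1., 0.]), ious = torch.tensor([0.9, 0., 0.8]) and pos_mask = torch.tensor([True, False, True]), the mean ranking error is 1/6 (the negative outranks the second positive) while the sorting error is 0 (the positives are already ordered by IoU).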

Benefits of RS Loss on Simplification of Training. With RS Loss, we significantly simplify training: (i) Thanks to our sorting objective, the positives are prioritized by the classifier without an additional auxiliary head (e.g. for centerness, IoU, mask-IoU), (ii) due to its ranking-based nature, RS Loss is robust to class imbalance, and thus, no sampling heuristic is required, and (iii) we address the multi-task nature of visual detectors using tuning-free task-balancing coefficients.
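
As an illustration of item (iii), the self-balancing idea can be sketched as scaling each auxiliary loss by the ratio of the current loss values, so no coefficient has to be tuned by hand. This is our simplified reading of the idea, not the repository's exact code:

```python
import torch

def self_balanced_total(loss_rs, loss_box):
    # Scale the box-regression loss so its magnitude matches the RS
    # (classification) loss; detach() keeps the ratio acting as a plain
    # coefficient rather than a trainable quantity.
    w_box = (loss_rs / loss_box.clamp(min=1e-12)).detach()
    return loss_rs + w_box * loss_box
```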

Benefits of RS Loss on Improving Performance. Using RS Loss, we train seven diverse visual detectors only by tuning the learning rate, and show that RS Loss consistently outperforms the baselines: e.g., it improves (i) Faster R-CNN by ~3 box AP and aLRP Loss (a ranking-based baseline) by ~2 box AP on the COCO dataset, and (ii) Mask R-CNN with repeat factor sampling by 3.5 mask AP (~7 AP for rare classes) on the LVIS dataset.

How to Cite

Please cite the paper if you benefit from it or from this repository:

@inproceedings{RSLoss,
       title = {Rank \& Sort Loss for Object Detection and Instance Segmentation},
       author = {Kemal Oksuz and Baris Can Cam and Emre Akbas and Sinan Kalkan},
       booktitle = {International Conference on Computer Vision (ICCV)},
       year = {2021}
}

Specification of Dependencies and Preparation

  • Please see get_started.md for requirements and installation of mmdetection.
  • Please refer to introduction.md for dataset preparation and basic usage of mmdetection.

Trained Models

Here, we report minival results in terms of AP and oLRP.

Multi-stage Object Detection

RS-R-CNN

| Backbone | Epoch | Carafe | MS train | box AP | box oLRP | Log | Config | Model |
|---|---|---|---|---|---|---|---|---|
| ResNet-50 | 12 | | | 39.6 | 67.9 | log | config | model |
| ResNet-50 | 12 | + | | 40.8 | 66.9 | log | config | model |
| ResNet-101-DCN | 36 | | [480,960] | 47.6 | 61.1 | log | config | model |
| ResNet-101-DCN | 36 | + | [480,960] | 47.7 | 60.9 | log | config | model |

RS-Cascade R-CNN

| Backbone | Epoch | box AP | box oLRP | Log | Config | Model |
|---|---|---|---|---|---|---|
| ResNet-50 | 12 | 41.3 | 66.6 | Coming soon | Coming soon | Coming soon |

One-stage Object Detection

| Method | Backbone | Epoch | box AP | box oLRP | Log | Config | Model |
|---|---|---|---|---|---|---|---|
| RS-ATSS | ResNet-50 | 12 | 39.9 | 67.9 | log | config | model |
| RS-PAA | ResNet-50 | 12 | 41.0 | 67.3 | log | config | model |

Multi-stage Instance Segmentation

RS-Mask R-CNN on COCO Dataset

| Backbone | Epoch | Carafe | MS train | mask AP | box AP | mask oLRP | box oLRP | Log | Config | Model |
|---|---|---|---|---|---|---|---|---|---|---|
| ResNet-50 | 12 | | | 36.4 | 40.0 | 70.1 | 67.5 | log | config | model |
| ResNet-50 | 12 | + | | 37.3 | 41.1 | 69.4 | 66.6 | log | config | model |
| ResNet-101 | 36 | | [640,800] | 40.3 | 44.7 | 66.9 | 63.7 | log | config | model |
| ResNet-101 | 36 | + | [480,960] | 41.5 | 46.2 | 65.9 | 62.6 | log | config | model |
| ResNet-101-DCN | 36 | + | [480,960] | 43.6 | 48.8 | 64.0 | 60.2 | log | config | model |
| ResNeXt-101-DCN | 36 | + | [480,960] | 44.4 | 49.9 | 63.1 | 59.1 | Coming soon | config | model |

RS-Mask R-CNN on LVIS Dataset

| Backbone | Epoch | MS train | mask AP | box AP | mask oLRP | box oLRP | Log | Config | Model |
|---|---|---|---|---|---|---|---|---|---|
| ResNet-50 | 12 | [640,800] | 25.2 | 25.9 | Coming soon | Coming soon | Coming soon | Coming soon | Coming soon |

One-stage Instance Segmentation

RS-YOLACT

| Backbone | Epoch | mask AP | box AP | mask oLRP | box oLRP | Log | Config | Model |
|---|---|---|---|---|---|---|---|---|
| ResNet-50 | 55 | 29.9 | 33.8 | 74.7 | 71.8 | log | config | model |

RS-SOLOv2

| Backbone | Epoch | mask AP | mask oLRP | Log | Config | Model |
|---|---|---|---|---|---|---|
| ResNet-34 | 36 | 32.6 | 72.7 | Coming soon | Coming soon | Coming soon |
| ResNet-101 | 36 | 39.7 | 66.9 | Coming soon | Coming soon | Coming soon |

Running the Code

Training Code

The configuration files of all models listed above can be found in the configs/ranksort_loss folder. You can follow get_started.md for the training code. As an example, to train Faster R-CNN with RS Loss on 4 GPUs as we did, use the following command:

./tools/dist_train.sh configs/ranksort_loss/ranksort_faster_rcnn_r50_fpn_1x_coco.py 4
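
If you train with a different number of GPUs, the effective batch size changes, and mmdetection's convention is to scale the learning rate linearly with it. Depending on your mmdetection version the override flag is --cfg-options (newer) or --options (older); the command below is only an illustration with a placeholder value:

./tools/dist_train.sh configs/ranksort_loss/ranksort_faster_rcnn_r50_fpn_1x_coco.py 8 --cfg-options optimizer.lr=${SCALED_LR}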

Test Code

You can follow get_started.md for the test code. As an example, first download a trained model using the links provided in the tables above (or train one yourself), then run the following command to test an object detection model on multiple GPUs:

./tools/dist_test.sh configs/ranksort_loss/ranksort_faster_rcnn_r50_fpn_1x_coco.py ${CHECKPOINT_FILE} 4 --eval bbox 

and use the following command to test an instance segmentation model on multiple GPUs:

./tools/dist_test.sh configs/ranksort_loss/ranksort_mask_rcnn_r50_fpn_1x_coco.py ${CHECKPOINT_FILE} 4 --eval bbox segm 

You can also test a model on a single GPU with the following example command:

python tools/test.py configs/ranksort_loss/ranksort_faster_rcnn_r50_fpn_1x_coco.py ${CHECKPOINT_FILE} --eval bbox

Details for Rank & Sort Loss Implementation

To check out the details of the implementation, please see the loss and head implementation files in this repository.
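
Ranking-based losses such as RS Loss are typically wired into PyTorch through a custom autograd Function, because the gradients come from the error-driven decomposition rather than from autograd tracing the loss value. The skeleton below shows only that pattern; the class name and the placeholder computations are ours, purely illustrative:

```python
import torch

class RankSortSketch(torch.autograd.Function):
    # Schematic only: forward would compute the ranking/sorting errors and
    # cache per-logit "primary terms"; backward hands them back as the
    # gradient of the logits (the error-driven update).
    @staticmethod
    def forward(ctx, logits, targets):
        primary_terms = torch.zeros_like(logits)  # placeholder values
        ctx.save_for_backward(primary_terms)
        return logits.new_zeros(())               # placeholder loss value

    @staticmethod
    def backward(ctx, grad_output):
        primary_terms, = ctx.saved_tensors
        return primary_terms * grad_output, None  # no gradient for targets
```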
