Awesome Long-Tailed Learning

A curated list of awesome deep long-tailed learning resources. We recently released Deep Long-Tailed Learning: A Survey to the community. In this survey, we reviewed recent advances in long-tailed learning based on deep neural networks.

Specifically, existing long-tailed learning studies can be grouped into three main categories (i.e., class re-balancing, information augmentation, and module improvement), which can be further classified into nine sub-categories (as shown in the figure below). We also empirically analyzed several state-of-the-art methods by evaluating to what extent they address the issue of class imbalance. We concluded the survey by highlighting important applications of deep long-tailed learning and identifying several promising directions for future research. After completing this survey, we decided to release the collected long-tailed learning resources, hoping to advance the development of the community. If you have any questions or suggestions, please feel free to contact us.

1. Type of Long-tailed Learning

Sampling = Re-sampling
CSL = Cost-sensitive Learning
LA = Logit Adjustment
TL = Transfer Learning
Aug = Data Augmentation
RL = Representation Learning
CD = Classifier Design
DT = Decoupled Training
Ensemble = Ensemble Learning
Other = Other Types
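
As a concrete illustration of one of these types, logit adjustment (LA) methods such as "Long-tail learning via logit adjustment" (ICLR 2021, listed below) shift the logits by the log of the class priors, either during training or post hoc at inference. The following is a minimal PyTorch-style sketch, not taken from any official implementation; class_counts and tau are illustrative names:

import torch
import torch.nn.functional as F

def logit_adjusted_loss(logits, targets, class_counts, tau=1.0):
    # Training-time variant: add tau * log(prior) to the logits before
    # cross-entropy, so rare classes receive a larger effective margin.
    prior = class_counts.float() / class_counts.sum()
    adjusted = logits + tau * torch.log(prior + 1e-12)
    return F.cross_entropy(adjusted, targets)

def posthoc_logit_adjustment(logits, class_counts, tau=1.0):
    # Post-hoc variant: subtract tau * log(prior) from a trained model's
    # logits at test time before taking the argmax.
    prior = class_counts.float() / class_counts.sum()
    return logits - tau * torch.log(prior + 1e-12)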

2. Top-tier Conference Papers

2021

Title Venue Year Type Code
Improving contrastive learning on imbalanced seed data via open-world sampling NeurIPS 2021 Sampling,TL,CD Official
Semi-supervised semantic segmentation via adaptive equalization learning NeurIPS 2021 Sampling,CSL,TL,Aug Official
On model calibration for long-tailed object detection and instance segmentation NeurIPS 2021 LA Official
Label-imbalanced and group-sensitive classification under overparameterization NeurIPS 2021 LA
Towards calibrated model for long-tailed visual recognition from prior perspective NeurIPS 2021 Aug,RL Official
Supercharging imbalanced data learning with energy-based contrastive representation transfer NeurIPS 2021 Aug,TL,RL Official
VideoLT: Large-scale long-tailed video recognition ICCV 2021 Sampling Official
Exploring classification equilibrium in long-tailed object detection ICCV 2021 Sampling,CSL Official
GistNet: a geometric structure transfer network for long-tailed recognition ICCV 2021 Sampling,TL,CD
FASA: Feature augmentation and sampling adaptation for long-tailed instance segmentation ICCV 2021 Sampling,CSL
ACE: Ally complementary experts for solving long-tailed recognition in one-shot ICCV 2021 Sampling,Ensemble Official
Influence-Balanced Loss for Imbalanced Visual Classification ICCV 2021 CSL Official
Re-distributing biased pseudo labels for semi-supervised semantic segmentation: A baseline investigation ICCV 2021 TL Official
Self supervision to distillation for long-tailed visual recognition ICCV 2021 TL Official
Distilling virtual examples for long-tailed recognition ICCV 2021 TL
MosaicOS: A simple and effective use of object-centric images for long-tailed object detection ICCV 2021 TL Official
Parametric contrastive learning ICCV 2021 RL Official
Distributional robustness loss for long-tail learning ICCV 2021 RL Official
Learning of visual relations: The devil is in the tails ICCV 2021 DT
Image-Level or Object-Level? A Tale of Two Resampling Strategies for Long-Tailed Detection ICML 2021 Sampling Official
Delving into deep imbalanced regression ICML 2021 Other Official
Long-tailed multi-label visual recognition by collaborative training on uniform and re-balanced samplings CVPR 2021 Sampling,Ensemble
Equalization loss v2: A new gradient balance approach for long-tailed object detection CVPR 2021 CSL Official
Seesaw loss for long-tailed instance segmentation CVPR 2021 CSL Official
Adaptive class suppression loss for long-tail object detection CVPR 2021 CSL Official
PML: Progressive margin loss for long-tailed age classification CVPR 2021 CSL
Disentangling label distribution for long-tailed visual recognition CVPR 2021 CSL,LA Official
Adversarial robustness under long-tailed distribution CVPR 2021 CSL,LA,CD Official
Distribution alignment: A unified framework for long-tail visual recognition CVPR 2021 CSL,LA,DT Official
Improving calibration for long-tailed recognition CVPR 2021 CSL,Aug,DT Official
CReST: A class-rebalancing self-training framework for imbalanced semi-supervised learning CVPR 2021 TL Official
Conceptual 12M: Pushing web-scale image-text pre-training to recognize long-tail visual concepts CVPR 2021 TL Official
RSG: A simple but effective module for learning imbalanced datasets CVPR 2021 TL,Aug Official
MetaSAug: Meta semantic augmentation for long-tailed visual recognition CVPR 2021 Aug Official
Contrastive learning based hybrid networks for long-tailed image classification CVPR 2021 RL
Unsupervised discovery of the long-tail in instance segmentation using hierarchical self-supervision CVPR 2021 RL
Long-tail learning via logit adjustment ICLR 2021 LA Official
Long-tailed recognition by routing diverse distribution-aware experts ICLR 2021 TL,Ensemble Official
Exploring balanced feature spaces for representation learning ICLR 2021 RL,DT

2020

Title Venue Year Type Code
Balanced meta-softmax for long-tailed visual recognition NeurIPS 2020 Sampling,CSL Official
Posterior recalibration for imbalanced datasets NeurIPS 2020 LA Official
Long-tailed classification by keeping the good and removing the bad momentum causal effect NeurIPS 2020 LA,CD Official
Rethinking the value of labels for improving class-imbalanced learning NeurIPS 2020 TL,RL Official
The devil is in classification: A simple framework for long-tail instance segmentation ECCV 2020 Sampling,DT,Ensemble Official
Imbalanced continual learning with partitioning reservoir sampling ECCV 2020 Sampling Official
Distribution-balanced loss for multi-label classification in long-tailed datasets ECCV 2020 CSL Official
Feature space augmentation for long-tailed data ECCV 2020 TL,Aug,DT
Learning from multiple experts: Self-paced knowledge distillation for long-tailed classification ECCV 2020 TL,Ensemble Official
Solving long-tailed recognition with deep realistic taxonomic classifier ECCV 2020 CD Official
Learning to segment the tail CVPR 2020 Sampling,TL Official
BBN: Bilateral-branch network with cumulative learning for long-tailed visual recognition CVPR 2020 Sampling,Ensemble Official
Overcoming classifier imbalance for long-tail object detection with balanced group softmax CVPR 2020 Sampling,Ensemble Official
Rethinking class-balanced methods for long-tailed visual recognition from a domain adaptation perspective CVPR 2020 CSL Official
Equalization loss for long-tailed object recognition CVPR 2020 CSL Official
Domain balancing: Face recognition on long-tailed domains CVPR 2020 CSL
M2m: Imbalanced classification via major-to-minor translation CVPR 2020 TL,Aug Official
Deep representation learning on long-tailed data: A learnable embedding augmentation perspective CVPR 2020 TL,Aug,RL
Inflated episodic memory with region self-attention for long-tailed visual recognition CVPR 2020 RL
Decoupling representation and classifier for long-tailed recognition ICLR 2020 Sampling,CSL,RL,CD,DT Official

2019

Title Venue Year Type Code
Meta-weight-net: Learning an explicit mapping for sample weighting NeurIPS 2019 CSL Official
Learning imbalanced datasets with label-distribution-aware margin loss NeurIPS 2019 CSL Official
Dynamic curriculum learning for imbalanced data classification ICCV 2019 Sampling
Class-balanced loss based on effective number of samples CVPR 2019 CSL Official
Striking the right balance with uncertainty CVPR 2019 CSL
Feature transfer learning for face recognition with under-represented data CVPR 2019 TL,Aug
Unequal-training for deep face recognition with long-tailed noisy data CVPR 2019 RL Official
Large-scale long-tailed recognition in an open world CVPR 2019 RL Official

2018

Title Venue Year Type Code
Large scale fine-grained categorization and domain-specific transfer learning CVPR 2018 TL Official

2017

Title Venue Year Type Code
Learning to model the tail NeurIPS 2017 CSL
Focal loss for dense object detection ICCV 2017 CSL
Range loss for deep face recognition with long-tailed training data ICCV 2017 RL
Class rectification hard mining for imbalanced deep learning ICCV 2017 RL

2016

Title Venue Year Type Code
Learning deep representation for imbalanced classification CVPR 2016 Sampling,RL
Factors in finetuning deep model for object detection with long-tail distribution CVPR 2016 CSL,RL

3. Benchmark Datasets

Dataset Long-tailed Task # Class # Training data # Test data
ImageNet-LT Classification 1,000 115,846 50,000
CIFAR100-LT Classification 100 50,000 10,000
Places-LT Classification 365 62,500 36,500
iNaturalist 2018 Classification 8,142 437,513 24,426
LVIS v0.5 Detection and Segmentation 1,230 57,000 20,000
LVIS v1 Detection and Segmentation 1,203 100,000 19,800
VOC-LT Multi-label Classification 20 1,142 4,952
COCO-LT Multi-label Classification 80 1,909 5,000
VideoLT Video Classification 1,004 179,352 25,622
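
For reference, several of the long-tailed classification benchmarks above are built by subsampling balanced source datasets. For example, CIFAR100-LT is commonly derived from the balanced CIFAR-100 training set (the 50,000 images listed above) by exponentially decaying the per-class sample counts according to a chosen imbalance factor, while the test set stays balanced. A rough sketch of this commonly used construction follows; exact details vary across papers:

def long_tailed_class_counts(n_max=500, num_classes=100, imbalance_factor=100):
    # Per-class training sizes for an exponentially imbalanced CIFAR100-LT split.
    # n_max: samples of the most frequent class (500 per class in CIFAR-100).
    # imbalance_factor: ratio between the largest and smallest class (e.g., 10, 50, 100).
    return [
        int(n_max * imbalance_factor ** (-i / (num_classes - 1)))
        for i in range(num_classes)
    ]

# With imbalance factor 100, the head class keeps 500 samples
# and the rarest tail class keeps roughly 5.
counts = long_tailed_class_counts()
print(counts[0], counts[-1])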

4. Empirical Studies

(1) Long-tailed benchmarking performance

  • We evaluate several state-of-the-art methods on ImageNet-LT to examine to what extent they handle class imbalance, using two new evaluation metrics: UA (upper bound accuracy) and RA (relative accuracy). We categorize these methods based on class re-balancing (CR), information augmentation (IA) and module improvement (MI).

  • Almost all long-tailed methods perform better than the Softmax baseline in terms of accuracy, which demonstrates the effectiveness of long-tailed learning.
  • Training with 200 epochs leads to better performance for most long-tailed methods, since sufficient training enables deep models to fit data better and learn better image representations.
  • In addition to accuracy, we also evaluate long-tailed methods based on UA and RA. For methods with higher UA, the performance gain comes not only from alleviating class imbalance but also from other factors, such as data augmentation or better network architectures. Therefore, accuracy alone is not a sufficient evaluation criterion, while our proposed RA metric provides a good complement, since it reduces the influence of factors other than class imbalance.
  • For example, MiSLAS, which is based on data mixup, has higher accuracy than Balanced Softmax under 90 training epochs, but it also has higher UA. As a result, the relative accuracy of MiSLAS is lower than that of Balanced Softmax, which means that Balanced Softmax alleviates class imbalance better than MiSLAS under 90 training epochs (a sketch of this computation follows this list).
  • Although some recent high-accuracy methods have lower RA, the overall development trend of long-tailed learning is still positive, as shown in the figure below.

  • The current state-of-the-art long-tailed method in terms of both accuracy and RA is TADE (an ensemble-based method).
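
As implied by the MiSLAS vs. Balanced Softmax comparison above, relative accuracy normalizes a method's achieved accuracy by its upper bound accuracy, so gains that come from factors other than imbalance handling are discounted. A minimal sketch of that relation is given below; the numbers are placeholders, not results from the survey:

def relative_accuracy(accuracy, upper_bound_accuracy):
    # RA = achieved accuracy on the long-tailed benchmark / upper bound accuracy (UA).
    # A method with a high UA (e.g., from augmentation or a stronger backbone)
    # must reach a proportionally higher accuracy to obtain the same RA.
    return accuracy / upper_bound_accuracy

# Hypothetical illustration: higher raw accuracy, yet lower RA.
print(relative_accuracy(53.0, 72.0))  # ~0.736
print(relative_accuracy(52.0, 68.0))  # ~0.765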

(2) More discussions on cost-sensitive losses

  • We further evaluate the performance of different cost-sensitive learning losses under the decoupled training scheme (a sketch of this scheme follows this list).
  • Compared to joint training, decoupled training can further improve the overall performance of most cost-sensitive learning methods, apart from balanced softmax (BS).
  • Although BS outperforms other cost-sensitive losses under one-stage training, they perform comparably under decoupled training. This implies that although these cost-sensitive losses perform differently under joint training, they learn feature representations of essentially similar quality.
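
For context on the decoupled training scheme referenced above (see, e.g., "Decoupling representation and classifier for long-tailed recognition" in the 2020 table), the common recipe is: first train the whole network with instance-balanced sampling, then freeze the backbone and re-train only the classifier with class-balanced sampling and the chosen cost-sensitive loss. The following is a rough PyTorch-style sketch under assumed model and loader names, not an official implementation:

import torch
import torch.nn.functional as F

def decoupled_training(model, instance_loader, balanced_loader, cs_loss,
                       epochs_stage1=90, epochs_stage2=10):
    # Stage 1: learn representations with standard instance-balanced sampling.
    opt1 = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
    for _ in range(epochs_stage1):
        for images, labels in instance_loader:
            loss = F.cross_entropy(model(images), labels)
            opt1.zero_grad()
            loss.backward()
            opt1.step()

    # Stage 2: freeze the backbone and re-train only the classifier with a
    # class-balanced loader and a cost-sensitive loss (cs_loss).
    # Assumes the model exposes .backbone and .classifier sub-modules.
    for p in model.backbone.parameters():
        p.requires_grad_(False)
    opt2 = torch.optim.SGD(model.classifier.parameters(), lr=0.01, momentum=0.9)
    for _ in range(epochs_stage2):
        for images, labels in balanced_loader:
            loss = cs_loss(model(images), labels)
            opt2.zero_grad()
            loss.backward()
            opt2.step()
    return model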

5. Citation

If this repository is helpful to you, please cite our survey.

@article{zhang2021deep,
  title={Deep long-tailed learning: A survey},
  author={Zhang, Yifan and Kang, Bingyi and Hooi, Bryan and Yan, Shuicheng and Feng, Jiashi},
  journal={arXiv preprint arXiv:2110.04596},
  year={2021}
}

6. Other Resources
