A PyTorch implementation of the CVPR 2021 paper "RSG: A Simple but Effective Module for Learning Imbalanced Datasets"

Overview

RSG: A Simple but Effective Module for Learning Imbalanced Datasets (CVPR 2021)

A PyTorch implementation of our CVPR 2021 paper "RSG: A Simple but Effective Module for Learning Imbalanced Datasets". RSG (Rare-class Sample Generator) is a flexible module that generates rare-class samples during training and can be combined with any backbone network. RSG is used only in the training phase, so it adds no extra burden to the backbone network in the testing phase.

How to use RSG in your own networks

  1. Initialize RSG module:

    from RSG import *
    
    # n_center: The number of centers, e.g., 15.
    # feature_maps_shape: The shape of input feature maps (channel, width, height), e.g., [32, 16, 16].
    # num_classes: The number of classes, e.g., 10.
    # contrastive_module_dim: The dimension of the contrastive module, e.g., 256.
    # head_class_lists: The index of head classes, e.g., [0, 1, 2].
    # transfer_strength: Transfer strength, e.g., 1.0.
    # epoch_thresh: The epoch index from which rare-class samples are generated, e.g., 159.
    
    self.RSG = RSG(n_center = 15, feature_maps_shape = [32, 16, 16], num_classes=10, contrastive_module_dim = 256, head_class_lists = [0, 1, 2], transfer_strength = 1.0, epoch_thresh = 159)
    
    
  2. Use RSG in the forward pass during training:

    out = self.layer2(out)
    
    # feature_maps: The input feature maps.
    # head_class_lists: The index of head classes.
    # target: The label of samples.
    # epoch: The current index of epoch.
    
    if phase_train:
      out, cesc_total, loss_mv_total, combine_target = self.RSG.forward(feature_maps = out, head_class_lists = [0, 1, 2], target = target, epoch = epoch)
     
    out = self.layer3(out) 
    

The two loss terms, "cesc_total" and "loss_mv_total", are returned and should be combined with the cross-entropy loss for backpropagation. More examples and details can be found in the models in the "Imbalanced_Classification/models" directory.
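
For reference, the sketch below shows one way to combine the returned terms with the classification loss. The call signature of the model and the 0.1/0.01 weights are illustrative assumptions rather than values taken from this repository; see the training scripts and the models under "Imbalanced_Classification/models" for the actual usage.

    import torch.nn as nn

    criterion = nn.CrossEntropyLoss()

    # "model", "images", "target", "epoch", and "optimizer" are placeholders for your
    # backbone, current training batch, and optimizer.
    # Forward pass during training: the backbone returns the logits together with the
    # two RSG loss terms and the targets aligned with the generated samples.
    output, cesc_total, loss_mv_total, combine_target = model(images, target=target, epoch=epoch, phase_train=True)

    # Combine cross-entropy with the two RSG terms; the weights here are placeholders.
    loss = criterion(output, combine_target) + 0.1 * cesc_total.mean() + 0.01 * loss_mv_total.mean()

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()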

How to train

Some examples:

Go into the "Imbalanced_Classification" directory.

  1. To reproduce the result of ResNet-32 on long-tailed CIFAR-10 ($\rho$ = 100) with RSG and LDAM-DRW:

    export CUDA_VISIBLE_DEVICES=0,1
    python cifar_train.py --imb_type exp --imb_factor 0.01 --loss_type LDAM --train_rule DRW
    
  2. To reproduce the result of ResNet-32 on step CIFAR-10 ($\rho$ = 50) with RSG and Focal loss:

    export CUDA_VISIBLE_DEVICES=0,1
    python cifar_train.py --imb_type step --imb_factor 0.02 --loss_type Focal --train_rule None
    
  3. To run experiments on iNaturalist 2018, Places-LT, or ImageNet-LT:

    First, prepare the datasets and their corresponding list files. For convenience, we provide the list files on Google Drive and Baidu Disk.

    Google Drive: download
    Baidu Disk: download (code: q3dk)

    To train the model:

    python inaturalist_train.py
    

    or

    python places_train.py
    

    or

    python imagenet_lt_train.py
    

    For Places-LT and ImageNet-LT, the model is trained on the training set, and the checkpoint that performs best on the validation set is saved for testing. Use "places_test.py" and "imagenet_lt_test.py" for testing.
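
    For example (assuming the test scripts are run in the same way as the training scripts above, without additional command-line arguments):

    python places_test.py

    or

    python imagenet_lt_test.py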

Citation

@inproceedings{Jianfeng2021RSG,
  title={RSG: A Simple but Effective Module for Learning Imbalanced Datasets},
  author={Jianfeng Wang and Thomas Lukasiewicz and Xiaolin Hu and Jianfei Cai and Zhenghua Xu},
  booktitle={Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
  year={2021}
}