An end-to-end machine learning library for directly optimizing AUC

Overview

LibAUC

An end-to-end machine learning library for AUC optimization.

Why LibAUC?

Deep AUC Maximization (DAM) is a paradigm for learning a deep neural network by maximizing the AUC score of the model on a dataset. There are several benefits to maximizing the AUC score over minimizing standard losses such as cross-entropy:

  • In many domains, AUC is the default metric for evaluating and comparing models. Directly maximizing the AUC score can therefore yield the largest improvement in a model's performance.
  • Many real-world datasets are imbalanced. AUC is better suited to imbalanced data distributions, since maximizing AUC aims to rank the prediction score of every positive example higher than that of every negative example (see the sketch below).
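
To make the ranking view concrete, here is a minimal NumPy sketch (illustrative only, not LibAUC code) that computes the empirical AUC as the fraction of correctly ordered positive-negative pairs:

import numpy as np

def pairwise_auc(scores, labels):
    # Empirical AUC: fraction of (positive, negative) pairs in which the
    # positive example receives the higher prediction score; ties count as 0.5.
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    diff = pos[:, None] - neg[None, :]          # all positive-vs-negative pairs
    return (diff > 0).mean() + 0.5 * (diff == 0).mean()

scores = np.array([0.9, 0.4, 0.8, 0.3, 0.1])
labels = np.array([1, 0, 1, 0, 0])
print(pairwise_auc(scores, labels))             # 1.0: every positive outranks every negative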

Installation

$ pip install libauc

Usage

Official Tutorials:

  • 01.Creating Imbalanced Benchmark Datasets [Notebook][Script]
  • 02.Training ResNet20 with Imbalanced CIFAR10 [Notebook][Script]
  • 03.Training with Pytorch Learning Rate Scheduling [Notebook][Script]
  • 04.Training with Imbalanced Datasets on Distributed Setting [Coming soon]

Quickstart for beginners:

>>> # import library
>>> from libauc.losses import AUCMLoss
>>> from libauc.optimizers import PESG
...
>>> # define loss and optimizer
>>> Loss = AUCMLoss(imratio=0.1)
>>> optimizer = PESG(imratio=0.1)
...
>>> # training
>>> model.train()
>>> for data, targets in trainloader:
...     data, targets = data.cuda(), targets.cuda()
...     preds = model(data)
...     loss = Loss(preds, targets)
...     optimizer.zero_grad()
...     loss.backward(retain_graph=True)
...     optimizer.step()
...
>>> # restart stage
>>> optimizer.update_regularizer()
...
>>> # evaluation
>>> model.eval()
>>> for data, targets in testloader:
...     data, targets = data.cuda(), targets.cuda()
...     preds = model(data)
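
Here model, trainloader, and testloader are assumed to be a PyTorch model and data loaders defined elsewhere. To turn the evaluation loop into an AUC number, one common pattern (a sketch using scikit-learn's roc_auc_score, not a LibAUC utility) is to collect scores and labels and score them once at the end:

import torch
from sklearn.metrics import roc_auc_score

model.eval()
all_preds, all_targets = [], []
with torch.no_grad():
    for data, targets in testloader:
        preds = model(data.cuda())
        all_preds.append(preds.cpu())
        all_targets.append(targets)
# AUC is rank-based, so raw scores work; flatten in case preds have shape (N, 1)
test_auc = roc_auc_score(torch.cat(all_targets).numpy().ravel(),
                         torch.cat(all_preds).numpy().ravel())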

Please visit our website or GitHub for more examples.

Citation

If you find LibAUC useful in your work, please cite the following paper:

@article{yuan2020robust,
title={Robust Deep AUC Maximization: A New Surrogate Loss and Empirical Studies on Medical Image Classification},
author={Yuan, Zhuoning and Yan, Yan and Sonka, Milan and Yang, Tianbao},
journal={arXiv preprint arXiv:2012.03173},
year={2020}
}

Contact

If you have any questions, please contact Zhuoning Yuan [[email protected]] and Tianbao Yang [[email protected]], or open a new issue on GitHub.

Comments
  • Only compatible with Nvidia GPU

    I tried running the example tutorial, but I got the following error:

    AssertionError: Found no NVIDIA driver on your system. Please check that you have an NVIDIA GPU and installed a driver from http://www.nvidia.com/Download/index.aspx

    opened by Beckham45 2
  • Extend to Multi-class Classification Task and Be compatible with PyTorch scheduler

    Hi Zhuoning,

    This is an interesting work! I am wondering if the DAM method can be extended to a multi-class classification task with long-tailed imbalanced data. Intuitively, this should be possible, as the well-known sklearn tool provides AUC scores for the multi-class setting using the one-versus-rest or one-versus-one technique.

    Besides, it seems that optimizer.update_regularizer() is called only when the learning rate is reduced, so it would be more elegant to incorporate this function call into a PyTorch LR scheduler. E.g.,

    scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(optimizer)
    scheduler.step()    # override the step to fulfill: optimizer.update_regularizer()

    In the current libauc version, the PESG optimizer is not compatible with the schedulers in torch.optim.lr_scheduler. It would be great if this feature could be supported in the future. (A possible interim workaround is sketched below.)
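
    For illustration, one interim workaround might be a hand-rolled plateau monitor (a sketch only; RegularizerPlateauScheduler is hypothetical and not part of libauc) that calls optimizer.update_regularizer() once a monitored validation metric stops improving, mimicking ReduceLROnPlateau:

    class RegularizerPlateauScheduler:
        # When the monitored metric (e.g. validation AUC) has not improved
        # for `patience` epochs, trigger the stage restart, which (per the
        # discussion above) coincides with reducing the learning rate.
        def __init__(self, optimizer, patience=3):
            self.optimizer = optimizer
            self.patience = patience
            self.best = float('-inf')
            self.num_bad_epochs = 0

        def step(self, metric):
            if metric > self.best:
                self.best = metric
                self.num_bad_epochs = 0
            else:
                self.num_bad_epochs += 1
            if self.num_bad_epochs >= self.patience:
                self.optimizer.update_regularizer()   # restart stage
                self.num_bad_epochs = 0

    # usage, once per epoch:
    #   scheduler = RegularizerPlateauScheduler(optimizer, patience=3)
    #   scheduler.step(val_auc)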

    Thanks for your work!

    opened by Cogito2012 2
  • When to use retain_graph=True?

    Hi,

    When to use retain_graph=True in the loss backward function?

    In two of the examples (2 and 4), it is set to True, but not in the others.

    I appreciate your time.

    opened by dfrahman 1
  • Using AUCMLoss with imratio>1

    I'm not very familiar with the maths in the paper, so please forgive me if I'm asking something obvious.

    The AUCMLoss uses the "imbalance ratio" between positive and negative samples. The ratio is defined as

    the ratio of # of positive examples to the # of negative examples

    Or imratio=#pos/#neg

    When #pos < #neg, imratio is some value between 0 and 1, but when #pos > #neg, imratio > 1.

    Will this break the loss calculations? I have a feeling it would invalidate the many 1-self.p calculations in the LibAUC implementation, but as i'm not familiar with the maths I can't say for sure.

    Also, is there a problem (mathematically speaking) with calculating imratio = #pos/#total_samples to avoid the issue above? When #pos << #neg, #neg approximates #total_samples. (The two definitions are contrasted in the sketch below.)
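
    For reference, a tiny sketch (illustrative only, not libauc code) contrasting the two definitions being discussed:

    import numpy as np

    labels = np.array([1, 1, 0, 0, 0, 0, 0, 0, 0, 0])   # 2 positives, 8 negatives
    num_pos = (labels == 1).sum()
    num_neg = (labels == 0).sum()

    imratio_pos_over_neg   = num_pos / num_neg      # 0.25; exceeds 1 once #pos > #neg
    imratio_pos_over_total = num_pos / len(labels)  # 0.20; always stays in (0, 1)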

    opened by ayhyap 1
  • AUCMLoss does not use margin argument

    I noticed in the AUCMLoss class that the margin argument is not used. Following the formulation in the paper, the forward function should be changed in line 20 from 2*self.alpha*(self.p*(1-self.p) + \ to 2*self.alpha*(self.p*(1-self.p)*self.margin + \

    opened by ayhyap 1
  • How to train multi-label classification tasks? (like chexpert)

    I have started using this library and I've read your paper Robust Deep AUC Maximization: A New Surrogate Loss and Empirical Studies on Medical Image Classification, and I'm still not sure how to train a multi-label classification (MLC) model.

    Specifically, how did you fine-tune for the Chexpert multi-label classification task? (i.e. classify 5 diseases, where each image may have presence of 0, 1 or more diseases)

    • The first step pre-training with Cross-entropy loss seems clear to me
    • You mention: "In the second step of AUC maximization, we replace the last classifier layer trained in the first step by random weights and use our DAM method to optimize the last classifier layer and all previous layers.". The new classifier layer is a single or multi-label classifier?
    • In the Appendix I, figure 7 shows only one score as output for Deep AUC maximization (i.e. only one disease)
    • In the code, both AUCMLoss() and APLoss_SH() apparently expect single-label outputs, not multi-label outputs

    How do you train for the 5 diseases? Train sequentially on Cardiomegaly, then Edema, and so on? With 5 losses added up (as in the sketch below)? Or something else?
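
    For illustration, the "5 losses added up" option might look like the sketch below (hypothetical, not the authors' confirmed recipe; in practice each class would need its own imratio, and the optimizer would have to track each loss's internal variables):

    from libauc.losses import AUCMLoss

    num_classes = 5
    losses = [AUCMLoss(imratio=0.1) for _ in range(num_classes)]

    def multilabel_aucm_loss(preds, targets):
        # preds, targets: tensors of shape (batch_size, num_classes),
        # one column per disease; sum the per-class AUCM losses.
        total = 0
        for c in range(num_classes):
            total = total + losses[c](preds[:, c], targets[:, c])
        return total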

    opened by pdpino 4
  • Example for tensorflow

    Thank you for the great library. Does it currently support TensorFlow? If so, could you provide an example of how it can be used with TensorFlow? Thank you very much.

    opened by Kokkini 1
Releases(1.1.4)
  • 1.1.4(Jul 26, 2021)

    What's New

    • Added a PyTorch dataloader for the CheXpert dataset. A tutorial for training on CheXpert is available here.
    • Added support for training AUC losses on CPU machines. Note: please remove the lines with .cuda() from the code.
    • Fixed some bugs and improved training stability
  • 1.1.3(Jun 16, 2021)

  • 1.1.2(Jun 14, 2021)

    What's New

    1. Added the SOAP optimizer, contributed by @qiqi-helloworld and @yzhuoning, for optimizing AUPRC. Please check the tutorial here.
    2. Updated ResNet18 and ResNet34 with models pretrained on ImageNet1K
    3. Added a new strategy for AUCM Loss: imratio is calculated over a mini-batch if no initial value is given
    4. Fixed some bugs and improved training stability
  • V1.1.0(May 10, 2021)

    What's New:

    • Fixed some bugs and improved the training stability
    • Changed the default settings in the loss function so that binary labels are 0 and 1
    • Added PyTorch dataloaders for CIFAR10, CIFAR100, CAT_vs_Dog, STL10
    • Enabled training DAM with PyTorch learning rate schedulers, e.g., ReduceLROnPlateau, CosineAnnealingLR