Boosted CVaR Classification (NeurIPS 2021)

Overview

Boosted CVaR Classification

Runtian Zhai, Chen Dan, Arun Sai Suggala, Zico Kolter, Pradeep Ravikumar
NeurIPS 2021

Table of Contents

- Quick Start
- Train
- Evaluation
- Introduction
- Algorithms
- Parameters
- Citation and Contact

Quick Start

Before running the code, please install all the required packages in requirements.txt by running:

pip install -r requirements.txt

In the code, we solve linear programs with the MOSEK solver, which requires a license. You can acquire a free academic license from https://www.mosek.com/products/academic-licenses/. Please make sure that the license file is placed in a folder where the solver can find it.
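As a quick sanity check (not part of the repository), the snippet below asks MOSEK to check out a license for linear optimization. By default MOSEK looks for the license at ~/mosek/mosek.lic, or at the path given by the MOSEKLM_LICENSE_FILE environment variable.

# Hypothetical license check, assuming the `mosek` Python package is installed.
import mosek

with mosek.Env() as env:
    # Raises mosek.Error if no valid license for linear optimization is found.
    env.checkoutlicense(mosek.feature.pts)
    print("MOSEK license found and checked out successfully.")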

Train

To train a set of base models with boosting, run the following shell command:

python train.py --dataset [DATASET] --data_root /path/to/dataset 
                --alg [ALGORITHM] --epochs [EPOCHS] --iters_per_epoch [ITERS]
                --scheduler [SCHEDULER] --warmup [WARMUP_EPOCHS] --seed [SEED]

Use the --download option to download the dataset if you are running for the first time. Use the --save_file option to save your training results into a .mat file. Set the training hyperparameters with --alpha, --beta and --eta.

For example, to train a set of base models on CIFAR-10 with AdaLPBoost, use the following shell command:

python train.py --dataset cifar10 --data_root data --alg adalpboost 
                --eta 1.0 --epochs 100 --iters_per_epoch 5000
                --scheduler 2000,4000 --warmup 20 --seed 2021
                --save_file cifar10.mat

Evaluation

To evaluate the models trained with the above command, run:

python test.py --file cifar10.mat

Introduction

In this work, we study the CVaR classification problem, which requires a classifier to have low α-CVaR loss, i.e., low average loss over the worst α fraction of the samples in the dataset. Previous work showed that no deterministic model learning algorithm can achieve a lower α-CVaR loss than ERM; we address this issue by learning randomized models. Specifically, we propose the Boosted CVaR Classification framework, which learns ensemble models via boosting. Our motivation comes from the direct relationship between the CVaR loss and the LPBoost objective. We implement two algorithms based on the framework: one uses LPBoost directly, and the other, named AdaLPBoost, uses AdaBoost to pick the sample weights and LPBoost to pick the model weights.
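Concretely, the empirical α-CVaR loss is the average of the per-sample losses over the worst α fraction of the dataset; with α = 1 it reduces to the ordinary (ERM) average loss. The snippet below is a minimal illustration of this quantity, not code from the repository:

import numpy as np

def cvar_loss(losses, alpha):
    # Empirical alpha-CVaR: mean loss over the worst alpha fraction of samples.
    n = len(losses)
    k = max(1, int(np.ceil(alpha * n)))   # number of worst-case samples
    worst = np.sort(losses)[-k:]          # the k largest per-sample losses
    return float(worst.mean())

# Example: with alpha = 0.1, this averages the worst 10% of the losses.
losses = np.random.rand(1000)
print(cvar_loss(losses, alpha=0.1), losses.mean())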

Algorithms

We implement three algorithms in algs.py:

Name         Description
uniform      All sample weight vectors are uniform distributions.
lpboost      Regularized LPBoost (set --beta for regularization).
adalpboost   α-AdaLPBoost.

train.py only trains the base models. After the base models are trained, use test.py to select the model weights by solving the dual LPBoost problem.
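For intuition only, here is a minimal sketch, under a simplified Rockafellar-Uryasev formulation rather than the repository's exact dual LPBoost problem, of choosing model weights that minimize the α-CVaR of the ensemble loss. It uses cvxpy with the MOSEK solver; loss is a hypothetical (samples × models) matrix of per-sample losses of the trained base models.

import cvxpy as cp
import numpy as np

n, T, alpha = 1000, 10, 0.1
loss = np.random.randint(0, 2, size=(n, T)).astype(float)   # placeholder 0/1 losses

lam = cp.Variable(T, nonneg=True)   # model weights: a distribution over base models
nu = cp.Variable()                  # CVaR auxiliary threshold
cvar = nu + cp.sum(cp.pos(loss @ lam - nu)) / (alpha * n)

prob = cp.Problem(cp.Minimize(cvar), [cp.sum(lam) == 1])
prob.solve(solver=cp.MOSEK)         # requires a MOSEK license, as noted above
print(lam.value)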

Parameters

All default training parameters can be found in config.py. For Regularized LPBoost we use β = 100 for all α. For AdaLPBoost we use η = 1.0.
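For reference, η acts as a boosting step size. The sketch below shows a generic AdaBoost/Hedge-style multiplicative sample-weight update with step size η; it is illustrative only, and the exact AdaLPBoost update is the one implemented in algs.py.

import numpy as np

def hedge_update(w, sample_losses, eta):
    # Multiplicatively up-weight samples that the current base model gets wrong,
    # then renormalize so the weights remain a distribution.
    w = w * np.exp(eta * sample_losses)
    return w / w.sum()

w = np.full(5, 1 / 5)   # start from the uniform distribution over 5 samples
print(hedge_update(w, np.array([1.0, 0.0, 1.0, 0.0, 0.0]), eta=1.0))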

Citation and Contact

To cite this work, please use the following BibTeX entry:

@inproceedings{zhai2021boosted,
  author = {Zhai, Runtian and Dan, Chen and Suggala, Arun Sai and Kolter, Zico and Ravikumar, Pradeep},
  booktitle = {Advances in Neural Information Processing Systems},
  title = {Boosted CVaR Classification},
  volume = {34},
  year = {2021}
}

To contact us, please email Runtian Zhai <[email protected]>.
