Boosted CVaR Classification (NeurIPS 2021)

Overview

Boosted CVaR Classification

Runtian Zhai, Chen Dan, Arun Sai Suggala, Zico Kolter, Pradeep Ravikumar
NeurIPS 2021

Table of Contents

Quick Start
  Train
  Evaluation
Introduction
Algorithms
Parameters
Citation and Contact

Quick Start

Before running the code, please install all the required packages in requirements.txt by running:

pip install -r requirements.txt

In the code, we solve linear programs with the MOSEK solver, which requires a license. You can acquire a free academic license from https://www.mosek.com/products/academic-licenses/. Please make sure that the license file is placed in the correct folder so that the solver can find it.
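
As a quick sanity check, you can verify that a license file sits in MOSEK's usual default search location. This is a minimal stdlib-only sketch; the assumed path ~/mosek/mosek.lic may differ depending on your operating system and MOSEK configuration, so consult the MOSEK documentation if it does not match your setup.

    # Check for a MOSEK license file in the default location (path is an assumption).
    import os

    lic = os.path.expanduser("~/mosek/mosek.lic")
    print("MOSEK license found" if os.path.isfile(lic) else "No license file at " + lic)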

Train

To train a set of base models with boosting, run the following shell command:

python train.py --dataset [DATASET] --data_root /path/to/dataset \
                --alg [ALGORITHM] --epochs [EPOCHS] --iters_per_epoch [ITERS] \
                --scheduler [SCHEDULER] --warmup [WARMUP_EPOCHS] --seed [SEED]

Use the --download option to download the dataset if you are running the code for the first time. Use the --save_file option to save your training results to a .mat file. Set the training hyperparameters with --alpha, --beta, and --eta.

For example, to train a set of base models on CIFAR-10 with AdaLPBoost, use the following shell command:

python train.py --dataset cifar10 --data_root data --alg adalpboost \
                --eta 1.0 --epochs 100 --iters_per_epoch 5000 \
                --scheduler 2000,4000 --warmup 20 --seed 2021 \
                --save_file cifar10.mat

Evaluation

To evaluate the models trained with the above command, run:

python test.py --file cifar10.mat
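
If you want to inspect the saved results directly, the output is a standard MATLAB .mat file and can be loaded with scipy. This is a small sketch only; the stored variable names depend on what train.py writes, so we simply list the keys here.

    # Inspect the contents of the saved results file.
    from scipy.io import loadmat

    results = loadmat("cifar10.mat")
    print([key for key in results if not key.startswith("__")])  # skip MATLAB metadata keys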

Introduction

In this work, we study the CVaR classification problem, which requires a classifier to have low α-CVaR loss, i.e. low average loss over the worst α fraction of the samples in the dataset. While previous work showed that no deterministic model learning algorithm can achieve a lower α-CVaR loss than ERM, we address this issue by learning randomized models. Specifically, we propose the Boosted CVaR Classification framework, which learns ensemble models via boosting. Our motivation comes from the direct relationship between the CVaR loss and the LPBoost objective. We implement two algorithms based on the framework: one uses LPBoost directly, and the other, named AdaLPBoost, uses AdaBoost to pick the sample weights and LPBoost to pick the model weights.
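
For intuition, on a finite dataset the α-CVaR loss is just the average of the largest ⌈αn⌉ per-sample losses. The following NumPy sketch illustrates this (the array name per_sample_losses is hypothetical, and we ignore the fractional-sample correction that applies when αn is not an integer):

    import numpy as np

    def cvar_loss(per_sample_losses, alpha):
        """Average loss over the worst alpha fraction of samples."""
        losses = np.sort(per_sample_losses)[::-1]        # sort losses in decreasing order
        k = int(np.ceil(alpha * len(losses)))            # number of worst samples to keep
        return losses[:k].mean()

    # Example: with alpha = 0.5, only the two largest of the four losses are averaged.
    print(cvar_loss(np.array([0.1, 0.2, 0.9, 1.3]), alpha=0.5))  # -> 1.1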

Algorithms

We implement three algorithms in algs.py:

Name         Description
uniform      All sample weight vectors are uniform distributions.
lpboost      Regularized LPBoost (set --beta for regularization).
adalpboost   α-AdaLPBoost.

train.py only trains the base models. After the base models are trained, use test.py to select the model weights by solving the dual LPBoost problem.
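
To illustrate the kind of linear program solved in this step, here is a hedged sketch that picks model weights minimizing the α-CVaR of the ensemble loss via the standard Rockafellar-Uryasev LP reformulation, using scipy's linprog instead of MOSEK. The loss matrix loss_matrix (one row per sample, one column per base model) is hypothetical, and the actual formulation in test.py may differ in details such as the β regularization.

    import numpy as np
    from scipy.optimize import linprog

    def select_model_weights(loss_matrix, alpha):
        """Minimize the alpha-CVaR of the mixture loss over the simplex of model weights.

        LP reformulation:
            min_{w, t, u}  t + (1 / (alpha * n)) * sum(u)
            s.t.           u >= L @ w - t,  u >= 0,  w >= 0,  sum(w) = 1
        """
        n, T = loss_matrix.shape
        # Decision variables: x = [w (T entries), t (1 entry), u (n entries)]
        c = np.concatenate([np.zeros(T), [1.0], np.full(n, 1.0 / (alpha * n))])
        # Inequality constraints: L @ w - t - u <= 0
        A_ub = np.hstack([loss_matrix, -np.ones((n, 1)), -np.eye(n)])
        b_ub = np.zeros(n)
        # Equality constraint: sum(w) = 1
        A_eq = np.concatenate([np.ones(T), [0.0], np.zeros(n)]).reshape(1, -1)
        b_eq = np.array([1.0])
        bounds = [(0, None)] * T + [(None, None)] + [(0, None)] * n
        res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq,
                      bounds=bounds, method="highs")
        return res.x[:T]  # the model weight vector

    # Toy example: 4 samples, 2 base models.
    L = np.array([[0.9, 0.1], [0.8, 0.2], [0.1, 0.9], [0.2, 0.7]])
    print(select_model_weights(L, alpha=0.5))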

Parameters

All default training parameters can be found in config.py. For Regularized LPBoost we use β = 100 for all α. For AdaLPBoost we use η = 1.0.

Citation and Contact

To cite this work, please use the following BibTeX entry:

@inproceedings{zhai2021boosted,
  author = {Zhai, Runtian and Dan, Chen and Suggala, Arun Sai and Kolter, Zico and Ravikumar, Pradeep},
  booktitle = {Advances in Neural Information Processing Systems},
  title = {Boosted CVaR Classification},
  volume = {34},
  year = {2021}
}

To contact us, please email Runtian Zhai <[email protected]>.
