
Overview

CORDS: COResets and Data Subset Selection


Reduce end-to-end training time from days to hours (or hours to minutes), and energy requirements/costs by an order of magnitude, using coresets and data selection.


What is CORDS?

CORDS is a COReset and Data Subset selection library for making machine learning time-, energy-, cost-, and compute-efficient. CORDS is built on top of PyTorch. Deep learning systems today are extremely compute-intensive, with long turnaround times, energy inefficiencies, high costs, and heavy resource requirements [1,2]. CORDS is an effort to make deep learning more energy-, cost-, resource-, and time-efficient without sacrificing accuracy. CORDS tries to achieve the following goals:

  • Data efficiency
  • Reducing end-to-end training time
  • Reducing energy requirements
  • Faster hyper-parameter tuning
  • Reducing resource (GPU) requirements and costs

The primary purpose of CORDS is to select the right representative data subsets from massive datasets, and it does so iteratively. CORDS uses recent advances in data subset selection, particularly ideas from coresets and submodularity, to select such subsets. CORDS implements a number of state-of-the-art data subset selection and coreset algorithms, including GLISTER, GradMatch, CRAIG, submodular selection, and Random-Online.

We are continuously incorporating newer and better algorithms into CORDS. Some of the features of CORDS include:

  • Reproducibility of SOTA in Data Selection and Coresets: Enable easy reproducibility of the SOTA results described above. We are also trying to add more algorithms, so if you have an algorithm you would like us to include, please let us know.
  • Benchmarking: We have benchmarked CORDS (and the algorithms currently implemented) on several datasets, including CIFAR-10, CIFAR-100, MNIST, SVHN, and ImageNet.
  • Ease of Use: One of the main goals of CORDS is to be easy to use and easy to extend. Feel free to contribute to CORDS!
  • Modular design: The data selection algorithms are separate from the training loop, enabling a modular design that supports varied usage scenarios.
  • Broad range of use cases: CORDS currently supports simple image classification tasks and hyper-parameter tuning, but we are working on integrating a number of additional use cases such as object detection, speech recognition, semi-supervised learning, AutoML, etc.

Installation

  1. To install the latest version of the CORDS package via pip:

    pip install -i https://test.pypi.org/simple/ cords
  2. To install from source:

    git clone https://github.com/decile-team/cords.git
    cd cords
    pip install -r requirements/requirements.txt
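
As a quick illustration of the intended workflow, here is a minimal sketch of a training loop driven by a CORDS subset-selection dataloader, adapted from the CIFAR-10 example notebook and the usage quoted in the issues below. The dataloader construction itself (e.g., GLISTERDataLoader and its exact arguments) is elided; check the documentation for the current API. The key difference from a standard PyTorch loop is that the loader yields a per-sample weight with each batch, which is folded into the loss:

    import torch

    # Per-sample losses (no reduction), so they can be combined with the
    # selection weights returned by the CORDS dataloader.
    criterion_nored = torch.nn.CrossEntropyLoss(reduction='none')

    def train_one_epoch(model, dataloader, optimizer, device):
        model.train()
        for _, (inputs, targets, weights) in enumerate(dataloader):
            inputs = inputs.to(device)
            targets = targets.to(device, non_blocking=True)
            weights = weights.to(device)
            optimizer.zero_grad()
            outputs = model(inputs)
            # Weighted loss: normalize the subset weights to sum to 1.
            losses = criterion_nored(outputs, targets)
            loss = torch.dot(losses, weights / weights.sum())
            loss.backward()
            optimizer.step()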

Next Steps

Tutorials

Documentation

The documentation for the latest version of CORDS can always be found here.

Comments
  • Logistic Regression support for Gradmatch

The Logistic Regression model throws errors when we do backpropagation. The fix for this is perhaps setting freeze=False in the forward function of utils/models/logreg_net.py. (A hypothetical sketch of such a freeze flag follows this issue.)

    opened by nlokeshiisc 4
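
    For context, here is a hypothetical sketch of what a freeze flag in a model's forward method typically does, and why freeze=True would break backpropagation; the actual utils/models/logreg_net.py in CORDS may differ in its details:

    import torch
    import torch.nn as nn

    # Hypothetical reconstruction for illustration only; not the actual
    # CORDS utils/models/logreg_net.py.
    class LogisticRegNet(nn.Module):
        def __init__(self, input_dim, num_classes):
            super().__init__()
            self.linear = nn.Linear(input_dim, num_classes)

        def forward(self, x, freeze=True):
            if freeze:
                # torch.no_grad() cuts the autograd graph here, so a later
                # loss.backward() fails -- consistent with the reported
                # error and the suggested freeze=False fix.
                with torch.no_grad():
                    out = self.linear(x)
            else:
                out = self.linear(x)
            return out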
  • [Bug] Got weight with same value when running examples.

Hi, I tested the example with supervised learning and the GLISTER strategy: https://github.com/decile-team/cords/blob/main/examples/SL/image_classification/python_notebooks/CORDS_SL_CIFAR10_Custom_Train.ipynb. But when I print the weights from the train loader, they are all 1.0. I believe that with the GLISTER strategy we should get different weights.

    tensor([1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1., 1.,
            1., 1.], device='cuda:0')
    

    Is that a bug or something special? Thanks.

    opened by HaoKang-Timmy 3
  • Segmentation fault (core dumped)

    Hi,

I was trying to deploy CORDS selection in my training, but this error popped up: Segmentation fault (core dumped).

    I imitated code from https://github.com/decile-team/cords/blob/main/examples/SL/image_classification/python_notebooks/CORDS_SL_CIFAR10_Custom_Train.ipynb.

So basically I put my training and testing loaders into GLISTERDataLoader, and swapped this code into mine:

    for _, (inputs, targets, weights) in enumerate(dataloader):
        inputs = inputs.to(device)
        targets = targets.to(device, non_blocking=True)
        weights = weights.to(device)
        optimizer.zero_grad()
        outputs = model(inputs)
        losses = criterion_nored(outputs, targets)
        loss = torch.dot(losses, weights/(weights.sum()))
        loss.backward()

Before modifying it, my code was running fine, so I believe there is an error inside CORDS. My dataset is CIFAR10.

    Thanks

    opened by chengwuxinlin 2
  • Replace apricot with submodlib

    Fixes #16
    submodlib is now used for the CRAIG strategy/dataloader as well as the submodular strategy/dataloader. Please let me know if you have any feedback!

    Notes:

    • I am not sure if sum redundancy (a submodular function implemented in apricot) has an analogue in submodlib, so it is disabled as an option for now.
    • It doesn't seem like submodularselectionstrategy.py is used in the corresponding dataloader. This may be a good opportunity to refactor, so that behavior is consistent between the two.
    • Any existing code that specifies the "optimizer" (greedy algorithm) used by apricot will break, since the names used by submodlib are different than those used by apricot (e.g. 'LazyGreedy' instead of 'Lazy'). This includes configs that use this option.
    opened by ghost 1
  • Typo in cords_cifar10_glister_train.ipynb

There is a typo in the cords_cifar10_glister_train.ipynb notebook: https://github.com/decile-team/cords/blob/main/examples/SL/image_classification/cords_cifar10_glister_train.ipynb

    glister_trn.configdata.train_args.print_every = 1
    glister_trn.configdata.train_args.device = 'cuda'
    glister_trn.configdata.dss_args.fraction = fraction
    

    instead of

    glister_trn.cfg.train_args.print_every = 1
    glister_trn.cfg.train_args.device = 'cuda'
    glister_trn.cfg.dss_args.fraction = fraction
    
    opened by eendee 1
  • Evaluation on ImageNet

    Hello, thanks for a very interesting and useful project.

Would you mind providing an evaluation method for ImageNet? I tried adding a loader for ImageNet to custom_dataset.py, but failed due to a GPU memory issue during subset selection.

    Many thanks!

    opened by Hayoung93 1
  • For GRAD_MATCH method, the weights associated with each data point in X (subset of training set)

    1. For the GRAD-MATCH method, there are weights associated with each data point in X (a subset of the training set). Do the weights have a physical significance? For example, if a weight's value is higher, does the corresponding selected data point contribute more to reducing the residual?
    2. During the iteration, the selected index is already among the selected indices, so the iteration breaks. Why does this happen? (A sketch of the non-negative OMP routine underlying GradMatch, which illustrates both points, follows this issue.) [email protected]
    opened by lishaguo 1
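
    Both questions above (and the OMP issue further down) concern the greedy solver that GradMatch's gradient-matching step is built on. As background, here is a minimal sketch of non-negative orthogonal matching pursuit (OMP): the weights are the non-negative coefficients that best reconstruct the target b (the full-data gradient) from the selected columns of A (per-sample gradients), so a larger weight means a larger contribution to reducing the residual, and the greedy loop stops early if the best-correlated index was already selected. Names here are illustrative, not CORDS's actual API.

    import numpy as np
    from scipy.optimize import nnls

    def nn_omp(A, b, k, tol=1e-6):
        """Greedy non-negative OMP: approximate b with a non-negative
        combination of at most k columns of A."""
        residual = b.copy()
        support, x = [], np.zeros(A.shape[1])
        for _ in range(k):
            # Pick the column most positively correlated with the residual.
            j = int(np.argmax(A.T @ residual))
            # Stop if the best index is already selected (cf. question 2)
            # or no column improves the fit.
            if j in support or (A[:, j] @ residual) <= tol:
                break
            support.append(j)
            # Re-fit non-negative weights on the support via NNLS.
            coef, _ = nnls(A[:, support], b)
            x[:] = 0.0
            x[support] = coef
            residual = b - A[:, support] @ coef
        return x  # nonzero entries are the per-point weights (question 1)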
  • Questions about accuracy logging

    Hello! Thanks for your great work.

    I'm currently working on this code and I want to ask a question about accuracy logging.

    https://github.com/decile-team/cords/blob/ff629ff15fac911cd3b82394ffd278c42dacd874/train.py#L530-L541

In line 541 of train.py, val_acc contains cumulative accuracies over input batches. For example, if the loader contains 4500 examples and the batch size is 1000, then tst_acc has 5 accuracies per evaluation (the first element of tst_acc will be the accuracy over the first 1000 examples).

    https://github.com/decile-team/cords/blob/ff629ff15fac911cd3b82394ffd278c42dacd874/train.py#L631-L633

In line 633, it prints the best value in tst_acc. In this case, the resulting best accuracies over different algorithms and seeds might be values evaluated on different test samples.

Is this what you intended? In my experience, evaluating algorithms on an identical test dataset is the convention (a sketch of full-test-set evaluation follows this issue). In addition, are the test accuracies reported in the GRAD-MATCH paper the best values as above, or the last test accuracy?

    Best, Jang-Hyun

    opened by Janghyun1230 1
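
    For reference, the convention described above -- computing one accuracy per epoch over the identical, full test set -- looks like the following minimal sketch (generic PyTorch, not CORDS's actual train.py):

    import torch

    def evaluate(model, test_loader, device):
        """Accuracy over the entire test set, not per-batch accuracies."""
        model.eval()
        correct, total = 0, 0
        with torch.no_grad():
            for inputs, targets in test_loader:
                inputs, targets = inputs.to(device), targets.to(device)
                preds = model(inputs).argmax(dim=1)
                correct += (preds == targets).sum().item()
                total += targets.size(0)
        return correct / total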
  • CORDS gradient calculations for different loss functions

    a) Implement gradient calculation for Squared Loss, Negative logistic loss, General loss function gradient computation, Hinge loss.

    b) Integrate the new gradient calculation with different selection strategies

    enhancement 
    opened by krishnatejakk 1
  • Refactor the folders in the repo

    • Add a folder called benchmarks which has all the results/benchmarks for the various cases. We should remove the results from the main readme and point them to that folder. Also, add the notebooks to reproduce the benchmark results
    • Rename notebooks to tutorials. Add different tutorials based on use cases (NLP, vision, SSL, hyper-parameter tuning, NAS, etc.)
    opened by rishabhk108 0
  • Inquiry about performance of gradmatch

Hello, I ran some experiments with GradMatch and Random-Online, and found that the two actually reach similar performance after 300 epochs, around 93%. Is there something important to note for reproducing the results? Thanks for your help!

    opened by pipilurj 0
  • Implement faster version of OMP

    Implement the following versions of OMP:

    1. FNNOMP (https://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7012095)
    2. SNNOMP (https://hal.univ-lorraine.fr/hal-01585253/document)
    high priority in progress 
    opened by krishnatejakk 0
  • Gradmatch Data subset selection method making training slow

    I tried to run some experiments as follows:

    • Ran full CIFAR-10 without any subset selection method to train ResNet-50, which took around 32m 31s.
    • Ran GradMatch CIFAR-10 subset selection with a 0.1 fraction, which took longer than full CIFAR-10, i.e., 22h 48m 40s.
    • Ran GradMatch CIFAR-10 subset selection with a 0.3 fraction, which took longer than the 0.1 GradMatch run.

    I am using scaled-resolution images of CIFAR-10, i.e., 224x224, and defined the ResNet-50 architecture accordingly. Can you let me know how to speed up experiments 2 and 3? In general, a subset selection method should make the whole training process faster, right?

    opened by animesh-007 9
  • Implement CRUST Algorithm

    1. Implement the CRUST strategy in the supervised learning setting.
    2. Create the CRUST data loader class, building it on top of the adaptive_dataloader class.
    enhancement 
    opened by krishnatejakk 0
Releases (v0.0.1)
  • v0.0.1(Mar 24, 2022)

    What's Changed

    • Selcon sahasra by @sahasrarjn in https://github.com/decile-team/cords/pull/73
    • Selcon sahasra by @sahasrarjn in https://github.com/decile-team/cords/pull/74

    New Contributors

    • @sahasrarjn made their first contribution in https://github.com/decile-team/cords/pull/73

    Full Changelog: https://github.com/decile-team/cords/compare/v0.0.0...v0.0.1

  • v0.0.0(Mar 4, 2022)

    Pre-release of CORDS

    What's Changed

    • Dev by @krishnatejakk in https://github.com/decile-team/cords/pull/9
    • CONFIG Files Pull by @krishnatejakk in https://github.com/decile-team/cords/pull/10
    • New Gradient Computation Code by @krishnatejakk in https://github.com/decile-team/cords/pull/11
    • Feature: add support for hyperparameter tuning with subset selection by @savan77 in https://github.com/decile-team/cords/pull/12
    • Added checkpoints to save the model and updated documentation by @dheerajnbhat in https://github.com/decile-team/cords/pull/15
    • test CI and dual tests by @noilreed in https://github.com/decile-team/cords/pull/29
    • Dual CI flow merge to main by @noilreed in https://github.com/decile-team/cords/pull/30
    • Refactor/data loader by @krishnatejakk in https://github.com/decile-team/cords/pull/36
    • Refactor/data loader by @krishnatejakk in https://github.com/decile-team/cords/pull/40
    • Refactor/data loader by @krishnatejakk in https://github.com/decile-team/cords/pull/66

    New Contributors

    • @krishnatejakk made their first contribution in https://github.com/decile-team/cords/pull/9
    • @savan77 made their first contribution in https://github.com/decile-team/cords/pull/12
    • @dheerajnbhat made their first contribution in https://github.com/decile-team/cords/pull/15
    • @noilreed made their first contribution in https://github.com/decile-team/cords/pull/29

    Full Changelog: https://github.com/decile-team/cords/commits/v0.0.0

Owner
decile-team
DECILE: Data EffiCient machIne LEarning