High-level batteries-included neural network training library for Pytorch

Overview

Pywick


High-Level Training framework for Pytorch

Pywick is a high-level Pytorch training framework that aims to get you up and running quickly with state of the art neural networks. Does the world need another Pytorch framework? Probably not. But we started this project when no good frameworks were available and it just kept growing. So here we are.

Pywick tries to stay on the bleeding edge of research into neural networks. If you just wish to run a vanilla CNN, this is probably going to be overkill. However, if you want to get lost in the world of neural networks, fine-tuning and hyperparameter optimization for months on end, then this is probably the right place for you :)

Among other things Pywick includes:

  • State-of-the-art normalization, activation, and loss functions, as well as optimizers not included in the standard Pytorch library (AddSign, Eve, Lookahead, RAdam, Ralamb, RangerLARS, etc.).
  • A high-level module for training with callbacks, constraints, metrics, conditions and regularizers.
  • Dozens of popular object classification and semantic segmentation models.
  • Comprehensive data loading, augmentation, transforms, and sampling capability.
  • Utility tensor functions.
  • Useful meters.
  • Basic GridSearch (exhaustive and random).

Docs

Hey, check this out, we now have docs! They're still a work in progress, though, so apologies for anything that's broken.

What's New (highlights)

  • Jun. 15, 2020
    • 200+ models added from rwightman's repo via torch.hub! See docs for all the variants!
    • Some minor bug fixes
  • Jan. 20, 2020
    • New release: 0.5.6 (minor fix from 0.5.5 for pypi)
    • Mish activation function (SoTA)
    • rwightman's pretrained/ported model variants for classification (44 total):
      • efficientnet Tensorflow port b0-b8, with and without AP, el/em/es, cc
      • mixnet L/M/S
      • mobilenetv3
      • mnasnet
      • spnasnet
    • Additional loss functions
  • Aug. 1, 2019
    • New segmentation NNs: BiSeNet, DANet, DenseASPP, DUNet, OCNet, PSANet
    • New Loss Functions: Focal Tversky Loss, OHEM CrossEntropy Loss, various combination losses
    • Major restructuring and standardization of NN models and loading functionality
    • General bug fixes and code improvements

Install

Pywick requires pytorch >= 1.0

pip install pywick

or specific version from git:

pip install git+https://github.com/achaiah/[email protected]

ModuleTrainer

The ModuleTrainer class provides a high-level training interface which abstracts away the training loop while providing callbacks, constraints, initializers, regularizers, and more.

Example:

from pywick.modules import ModuleTrainer
from pywick.initializers import XavierUniform
from pywick.metrics import CategoricalAccuracySingleInput
import torch.nn as nn
import torch.nn.functional as F

# Define your model EXACTLY as normal
class Network(nn.Module):
    def __init__(self):
        super(Network, self).__init__()
        self.conv1 = nn.Conv2d(1, 32, kernel_size=3)
        self.conv2 = nn.Conv2d(32, 64, kernel_size=3)
        self.fc1 = nn.Linear(1600, 128)
        self.fc2 = nn.Linear(128, 10)

    def forward(self, x):
        x = F.relu(F.max_pool2d(self.conv1(x), 2))
        x = F.relu(F.max_pool2d(self.conv2(x), 2))
        x = x.view(-1, 1600)
        x = F.relu(self.fc1(x))
        x = F.dropout(x, training=self.training)
        x = self.fc2(x)
        return F.log_softmax(x, dim=1)

model = Network()
trainer = ModuleTrainer(model)   # optionally supply cuda_devices as a parameter

initializers = [XavierUniform(bias=False, module_filter='fc*')]

# initialize metrics with top1 and top5 
metrics = [CategoricalAccuracySingleInput(top_k=1), CategoricalAccuracySingleInput(top_k=5)]

trainer.compile(loss='cross_entropy',
                # callbacks=callbacks,          # define your callbacks here (e.g. model saver, LR scheduler)
                # regularizers=regularizers,    # define regularizers
                # constraints=constraints,      # define constraints
                optimizer='sgd',
                initializers=initializers,
                metrics=metrics)

trainer.fit_loader(train_dataset_loader,
                   val_loader=val_dataset_loader,
                   num_epoch=20,
                   verbose=1)

You also have access to the standard evaluation and prediction functions:

loss = trainer.evaluate(x_train, y_train)
y_pred = trainer.predict(x_train)

PyWick provides a wide range of callbacks, generally mimicking the interface found in Keras:

  • CSVLogger - Logs epoch-level metrics to a CSV file
  • CyclicLRScheduler - Cycles through min-max learning rate
  • EarlyStopping - Provides ability to stop training early based on supplied criteria
  • History - Keeps history of metrics etc. during the learning process
  • LambdaCallback - Allows you to implement your own callbacks on the fly
  • LRScheduler - Simple learning rate scheduler based on function or supplied schedule
  • ModelCheckpoint - Comprehensive model saver
  • ReduceLROnPlateau - Reduces learning rate (LR) when a plateau has been reached
  • SimpleModelCheckpoint - Simple model saver
  • Additionally, a TensorboardLogger is incredibly easy to implement via TensorboardX (now part of the pytorch 1.1 release!)

from pywick.callbacks import EarlyStopping

callbacks = [EarlyStopping(monitor='val_loss', patience=5)]
trainer.set_callbacks(callbacks)

PyWick also provides regularizers:

  • L1Regularizer
  • L2Regularizer
  • L1L2Regularizer

and constraints:

  • UnitNorm
  • MaxNorm
  • NonNeg

Both regularizers and constraints can be selectively applied on layers using regular expressions and the module_filter argument. Constraints can be explicit (hard) constraints applied at an arbitrary batch or epoch frequency, or they can be implicit (soft) constraints similar to regularizers, where the constraint deviation is added as a penalty to the total model loss.

from pywick.constraints import MaxNorm, NonNeg
from pywick.regularizers import L1Regularizer

# hard constraint applied every 5 batches
hard_constraint = MaxNorm(value=2., frequency=5, unit='batch', module_filter='*fc*')
# implicit constraint added as a penalty term to model loss
soft_constraint = NonNeg(lagrangian=True, scale=1e-3, module_filter='*fc*')
constraints = [hard_constraint, soft_constraint]
trainer.set_constraints(constraints)

regularizers = [L1Regularizer(scale=1e-4, module_filter='*conv*')]
trainer.set_regularizers(regularizers)

You can also fit directly on a torch.utils.data.DataLoader and can have a validation set as well:

from pywick import TensorDataset
from torch.utils.data import DataLoader

train_dataset = TensorDataset(x_train, y_train)
train_loader = DataLoader(train_dataset, batch_size=32)

val_dataset = TensorDataset(x_val, y_val)
val_loader = DataLoader(val_dataset, batch_size=32)

trainer.fit_loader(train_loader, val_loader=val_loader, num_epoch=100)

Extensive Library of Image Classification Models (most are pretrained!)

Image Segmentation Models

To load one of these models:

Read the docs for useful details! Then dive in:

# use the `get_model` utility
from pywick.models.model_utils import get_model, ModelType

model = get_model(model_type=ModelType.CLASSIFICATION, model_name='resnet18', num_classes=1000, pretrained=True)

For a complete list of models (including many experimental ones) you can call the get_supported_models method, e.g. pywick.models.model_utils.get_supported_models(ModelType.SEGMENTATION).
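
For example, here is a quick sketch of discovering segmentation models and loading one by name. It assumes get_supported_models returns a plain list of model-name strings, which is worth verifying against the docs:

from pywick.models.model_utils import get_model, get_supported_models, ModelType

# list every segmentation model name the loader knows about
seg_names = get_supported_models(ModelType.SEGMENTATION)
print(seg_names)

# load one of the returned names (num_classes=2 is just an example value)
model = get_model(model_type=ModelType.SEGMENTATION, model_name=seg_names[0], num_classes=2, pretrained=True)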

Data Augmentation and Datasets

The PyWick package provides a wide variety of good data augmentation and transformation tools that can be applied during data loading. The package also provides the flexible TensorDataset, FolderDataset and MultiFolderDataset classes to handle most dataset needs.

Torch Transforms

These transforms work directly on torch tensors (a short composition sketch follows this list):
  • AddChannel
  • ChannelsFirst
  • ChannelsLast
  • Compose
  • ExpandAxis
  • Pad
  • PadNumpy
  • RandomChoiceCompose
  • RandomCrop
  • RandomFlip
  • RandomOrder
  • RangeNormalize
  • Slice2D
  • SpecialCrop
  • StdNormalize
  • ToFile
  • ToNumpyType
  • ToTensor
  • Transpose
  • TypeCast
Additionally, we provide image-specific manipulations directly on tensors:
  • Brightness
  • Contrast
  • Gamma
  • Grayscale
  • RandomBrightness
  • RandomChoiceBrightness
  • RandomChoiceContrast
  • RandomChoiceGamma
  • RandomChoiceSaturation
  • RandomContrast
  • RandomGamma
  • RandomGrayscale
  • RandomSaturation
  • Saturation
Affine Transforms (perform affine or affine-like transforms on torch tensors)
  • RandomAffine
  • RandomChoiceRotate
  • RandomChoiceShear
  • RandomChoiceTranslate
  • RandomChoiceZoom
  • RandomRotate
  • RandomShear
  • RandomSquareZoom
  • RandomTranslate
  • RandomZoom
  • Rotate
  • Shear
  • Translate
  • Zoom

We also provide a class for stringing multiple affine transformations together so that only one interpolation takes place (a short sketch follows the list below):

  • Affine
  • AffineCompose
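
A hedged sketch of that idea — the exact constructor arguments (rotation range in degrees, translation as a fraction of image size) are assumptions, but the point is that the composed ops collapse into a single affine so the image is interpolated only once:

from pywick.transforms import AffineCompose, RandomRotate, RandomTranslate

# assumed arguments -- a rotation of up to 30 degrees and a translation of up to 10%
affine_tform = AffineCompose([RandomRotate(30), RandomTranslate(0.1)])
x_aug = affine_tform(x)   # x is a torch image tensor, e.g. shape (3, H, W)
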
Blur and Scramble transforms (for tensors)
  • Blur
  • RandomChoiceBlur
  • RandomChoiceScramble
  • Scramble

Datasets and Sampling

We provide the following datasets, which give you general structure and iterators for sampling from, and applying transforms to, in-memory or out-of-memory data. In particular, the FolderDataset has been designed to fit most of your dataset needs. It has extensive options for data filtering and manipulation, and it supports loading images for classification, segmentation and even arbitrary source/target mapping. Take a good look at its documentation for more info; a minimal usage sketch follows the list below.

  • ClonedDataset
  • CSVDataset
  • FolderDataset
  • MultiFolderDataset
  • TensorDataset
  • tnt.BatchDataset
  • tnt.ConcatDataset
  • tnt.ListDataset
  • tnt.MultiPartitionDataset
  • tnt.ResampleDataset
  • tnt.ShuffleDataset
  • tnt.TensorDataset
  • tnt.TransformDataset
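
As a rough illustration, here is a minimal FolderDataset sketch for a classification-style folder layout (one sub-folder per class). The import path and keyword arguments shown (class_mode, transform) are assumptions made for the example — the real constructor has many more options, so consult the FolderDataset documentation for the exact signature.

from torch.utils.data import DataLoader
from pywick.datasets.FolderDataset import FolderDataset   # assumed import path
from pywick.transforms import RangeNormalize

# keyword names are assumptions -- see the FolderDataset docs for the real signature
dataset = FolderDataset(root='/path/to/train',             # one sub-folder per class
                        class_mode='label',                # yield (image, class_index) pairs
                        transform=RangeNormalize(0, 1))

loader = DataLoader(dataset, batch_size=32, shuffle=True)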

Imbalanced Datasets

In many scenarios it is important to ensure that your training set is properly balanced; however, it may not be practical in real life to obtain such a perfect dataset. In these cases you can use the ImbalancedDatasetSampler as a drop-in replacement for the basic sampler provided by the DataLoader. More information can be found here

from pywick.samplers import ImbalancedDatasetSampler

train_loader = torch.utils.data.DataLoader(train_dataset, 
    sampler=ImbalancedDatasetSampler(train_dataset),
    batch_size=args.batch_size, **kwargs)

Utility Functions

PyWick provides a few utility functions not commonly found elsewhere (a short usage sketch follows the list):

Tensor Functions

  • th_iterproduct (mimics itertools.product)
  • th_gather_nd (N-dimensional version of torch.gather)
  • th_random_choice (mimics np.random.choice)
  • th_pearsonr (mimics scipy.stats.pearsonr)
  • th_corrcoef (mimics np.corrcoef)
  • th_affine2d and th_affine3d (affine transforms on torch.Tensors)
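
A quick sketch of two of these helpers. The module path (pywick.utils) and the exact keyword names are assumptions carried over from their torchsample ancestry, so double-check before use:

import torch
from pywick.utils import th_iterproduct, th_random_choice   # assumed module path

# cartesian product of index ranges, like itertools.product(range(2), range(3))
grid = th_iterproduct(2, 3)     # expected: a (6, 2) tensor of all (i, j) index pairs

# sample 4 values from a 1-D tensor with replacement, like np.random.choice
vals = torch.tensor([1., 2., 3., 4., 5.])
picks = th_random_choice(vals, n_samples=4, replace=True)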

Acknowledgements and References

We stand on the shoulders of (github?) giants and couldn't have done this without the rich github ecosystem and community. This framework is based in part on the excellent Torchsample framework originally published by @ncullen93. Additionally, many models have been gently borrowed/modified from @Cadene's pretrained models repo as well as @Tramac's segmentation repo.

Thank you to the following people and the projects they maintain:
  • @ncullen93
  • @cadene
  • @deallynomore
  • @recastrodiaz
  • @zijundeng
  • @Tramac
  • And many others! (attributions listed in the codebase as they occur)
Thangs are broken matey! Arrr!!!
We're working on this project as time permits so you might discover bugs here and there. Feel free to report them, or better yet, to submit a pull request!
Comments
  • Sample Tutorial

    Sample Tutorial

    Hi,

    Thank you for creating this wonderful package!

    I am having trouble creating a data pipeline. Could you please create a simple example using, say, cat/dog data and describe how the data pipeline would work using pywick? I think this would be helpful to people who want to get started quickly with this package.

    Thanks

    opened by saurabh502 4
  • BCEDiceFocalLoss problem

    BCEDiceFocalLoss problem

    In the file, the dice is defined as a BCE loss, which seems to be wrong:

    def __init__(self, l=0.5, weight_of_focal=1.):
        super(BCEDiceFocalLoss, self).__init__()
        # self.bce = BCELoss2d()
        # self.dice = SoftDiceLoss()
        self.dice = BCELoss2d()
        self.focal = FocalLoss(l=l)
        self.weight_of_focal = weight_of_focal

    opened by liuzhiyangnku 3
  • Can not import Dataset

    Can not import Dataset

    Traceback (most recent call last):
      File "test_datasets.py", line 3, in <module>
        import pywick.datasets.tnt.dataset as dataset
      File "/data/pywick/pywick/datasets/tnt/dataset.py", line 1, in <module>
        from pywick.datasets.tnt.batchdataset import BatchDataset
      File "/data/pywick/pywick/datasets/tnt/batchdataset.py", line 2, in <module>
        from pywick.datasets.tnt.dataset import Dataset
    ImportError: cannot import name 'Dataset'
    
    opened by wangg12 3
  • DUNet Failed to Download.

    DUNet Failed to Download.

    Hello,

    I am trying to run your implementation of DUNet and it fails to initialize the model.

    The code I ran was:

    from pywick.models.segmentation import DUNet_Resnet50

    img = torch.randn(2, 3, 256, 256)
    model = DUNet_Resnet50()
    outputs = model(img)

    which threw this error:

    Model file /root/.torch/models/resnet50-25c4b509.pth is not found. Downloading. Downloading /root/.torch/models/resnet50-25c4b509.zip from https://hangzh.s3.amazonaws.com/encoding/models/resnet50-25c4b509.zip...

    RuntimeError                              Traceback (most recent call last)
    in ()
          1 img = torch.randn(2, 3, 256, 256)
    ----> 2 model = DUNet_Resnet50()
          3 outputs = model(img)

    /usr/local/lib/python3.7/dist-packages/pywick/models/segmentation/dunet.py in DUNet_Resnet50(num_classes, **kwargs)
        139 def DUNet_Resnet50(num_classes=1, **kwargs):
    --> 140     return get_dunet(num_classes=num_classes, backbone='resnet50', **kwargs)

    /usr/local/lib/python3.7/dist-packages/pywick/models/segmentation/dunet.py in get_dunet(num_classes, backbone, pretrained, **kwargs)
        133     pretrained : bool (default: True) - whether to load pretrained backbone network, that was trained on ImageNet.
        134     """
    --> 135     model = DUNet(num_classes=num_classes, backbone=backbone, pretrained=pretrained, **kwargs)
        136     return model

    /usr/local/lib/python3.7/dist-packages/pywick/models/segmentation/dunet.py in __init__(self, num_classes, pretrained, backbone, aux, **kwargs)
         27     def __init__(self, num_classes, pretrained=True, backbone='resnet101', aux=False, **kwargs):
    ---> 28         super(DUNet, self).__init__(num_classes, pretrained=pretrained, aux=aux, backbone=backbone, **kwargs)
         29         self.head = _DUHead(2144, **kwargs)
         30         self.dupsample = DUpsampling(256, num_classes, scale_factor=8, **kwargs)

    /usr/local/lib/python3.7/dist-packages/pywick/models/segmentation/da_basenets/segbase.py in __init__(self, num_classes, pretrained, aux, backbone, **kwargs)
         21         self.nclass = num_classes
         22         if backbone == 'resnet50':
    ---> 23             self.pretrained = resnet50_v1s(pretrained=pretrained, **kwargs)
         24         elif backbone == 'resnet101':
         25             self.pretrained = resnet101_v1s(pretrained=pretrained, **kwargs)

    /usr/local/lib/python3.7/dist-packages/pywick/models/segmentation/da_basenets/resnetv1b.py in resnet50_v1s(pretrained, model_root, **kwargs)
        236     if pretrained:
        237         from .model_store import get_resnet_file
    --> 238         model.load_state_dict(torch.load(get_resnet_file('resnet50', root=model_root)), strict=False)
        239     return model

    /usr/local/lib/python3.7/dist-packages/pywick/models/segmentation/da_basenets/model_store.py in get_resnet_file(name, root)
         49         download(_url_format.format(repo_url=repo_url, file_name=file_name),
         50                  path=zip_file_path,
    ---> 51                  overwrite=True)
         52         with zipfile.ZipFile(zip_file_path) as zf:
         53             zf.extractall(root)

    /usr/local/lib/python3.7/dist-packages/pywick/models/segmentation/da_basenets/download.py in download(url, path, overwrite, sha1_hash)
         66     r = requests.get(url, stream=True)
         67     if r.status_code != 200:
    ---> 68         raise RuntimeError("Failed downloading url %s"%url)
         69     total_length = r.headers.get('content-length')
         70     with open(fname, 'wb') as f:

    RuntimeError: Failed downloading url https://hangzh.s3.amazonaws.com/encoding/models/resnet50-25c4b509.zip

    The same happens with other DUNet backbones. I am not sure if other models have similar issues, but it seems to be a simple access problem.

    opened by arelhossan 1
  • Implementation of ARiA-2

    Implementation of ARiA-2

    Hi!

    Thanks a lot for the inclusion of ARiA in your library! The implementation of ARiA in the library is correct; however, I feel that for ARiA-2 there were substitution errors which led to an incorrect formula. If you substitute and reduce (for speedup), you end up with the following form: ARiA2(x) = x * sigmoid(beta*x)**alpha, which can be implemented as:

    class Aria2(nn.Module):
        """
        ARiA2 activation function, a special case of ARiA,
        for ARiA = f(x, 1, 0, 1, 1, b, 1/a)
        """

        def __init__(self, a=1.5, b=1.):
            super(Aria2, self).__init__()
            self.alpha = a
            self.beta = b

        def forward(self, x):
            return x * torch.sigmoid(self.beta * x) ** self.alpha

    opened by NarendraPatwardhan 1
  • TRAIN AND VALIDATION LOSS SHOWING NAN

    TRAIN AND VALIDATION LOSS SHOWING NAN

    Hello, first of all I would like to appreciate the work you are doing. I was trying to use FocalLoss for a multi-class problem and tried using FocalLoss2. It shows nan values for the train and validation loss. When I set gamma to 2, training fails after one epoch, while with gamma = 0 (equivalent to cross_entropy_loss) it runs, but the loss values are still nan. Please help me.

    opened by Divyanshupy 1
  • Problem with PolyConv2d, forward function

    Problem with PolyConv2d, forward function

    Hi, and thanks for your code. I'm trying to implement this in Keras for learning purposes, but as I do this I cannot understand block_index; I think you either forgot a for loop, or I don't see how block_index gets passed to it:

    class PolyConv2d(nn.Module):
        def forward(self, x, block_index):
            x = self.conv(x)
            bn = self.bn_blocks[block_index]
            x = bn(x)
            x = self.relu(x)
            return x
    

    please let me know if I was wrong about this one

    opened by pykeras 1
  • num_inputs?

    num_inputs?

    Pytorch datasets do not require a num_inputs or num_targets field.

    https://github.com/achaiah/pywick/blob/master/pywick/modules/module_trainer.py#L387

    opened by jfemiani 1
  • Custom metric doesn't show up in fit_loader()

    Custom metric doesn't show up in fit_loader()

    I wrote the following custom metric,

    class F1Score(pywick.metrics.Metric):
        def __init__(self, micmac='micro'):
            super().__init__()
            self._name = f'f1_{micmac}'
            self.micmac = micmac
            self.correct = []
            self.preds = []
    
        def reset(self):
            self.correct = []
            self.preds = []
    
        def __call__(self, inputs, y_pred, y_true, is_val=False):
            self.correct.extend(y_true.detach().to('cpu').numpy().tolist())
            self.preds.extend(y_pred.detach().to('cpu').numpy().argmax(axis=1).tolist())
            f1 = sklearn.metrics.f1_score(self.correct, self.preds, average=self.micmac)
            return f1
    

    and use it like so

    metrics = [F1Score('micro'), CategoricalAccuracySingleInput(top_k=1)]
    trainer.compile(optimizer='adam',
                    criterion='cross_entropy',
                    initializers=initializers,
                    metrics=metrics)
    

    But while running trainer.fit_loader(...), I get the following output

    Epoch 1/1:   7%|▋         | 7/98 [00:12<02:46,  1.83s/ batches, loss=0.6508, top_1:acc=76.04, learn_rates=[0.001]]
    

    No mention of the custom metric anywhere!

    But running trainer.eval_loader() gives,

    {'val_f1_micro': 0.7514395393474088,
     'val_loss': 0.6613252784975586,
     'val_top_1:acc_metric': 75.14395393474088}
    

    Which works fine, so what's going on with fit_loader()?

    opened by arciel 1
Releases (v0.6.5)
  • v0.6.5(Oct 22, 2021)

    Another great improvement to the framework - docker! You can now run the 17flowers demo right out of the box!

    • Grab our docker image at docker hub: docker pull achaiah/pywick:latest. Pytorch 1.8 and cuda dependencies are pre-installed.
    • Run 17flowers demo with: docker run --rm -it --ipc=host -v your_local_out_dir:/jobs/17flowers --init -e demo=true achaiah/pywick:latest
    • Or run the container in standalone mode so you can use your own data (don't forget to map your local dir to container):
    docker run --rm -it \
    --ipc=host \
    -v <your_local_data_dir>:<container_data_dir> \
    -v <your_local_out_dir>:<container_out_dir> \
    --init \
    achaiah/pywick:latest
    
  • v0.6.0(Oct 11, 2021)

    Huge release with new functionality and models!

    • Complete configuration support via YAML files. Run your training without writing a single line of code!
    • Classification training example with a fully functional YAML config.
    • 700+ classification models.
    • Improvements to code-base via deepsource.
    • New Loss functions.
    • New Segmentation models.
  • v0.5.6(Jan 20, 2020)

  • v0.5.5(Jan 15, 2020)

    Added ~50 new models (including many variants of efficientnet, mixnet, mnasnet, etc.), the SoTA Mish activation function, and new optimizers (Ralamb, Ranger, Lookahead).

  • v0.5.4(Sep 23, 2019)

    Major changes (see readme for details):

    • Added many new segmentation models (most are pretrained)
    • Added new optimizers
    • Added new loss functions
    • Improved model loading logic
    • Various bug fixes
  • v0.5.3(May 8, 2019)

  • v0.5.2(Mar 28, 2019)

  • v0.5.1(Mar 25, 2019)
