Toolbox of models, callbacks, and datasets for AI/ML researchers.

Overview

Pretrained SOTA Deep Learning models, callbacks and more for research and production with PyTorch Lightning and PyTorch


Website • Installation • Main goals • latest Docs • stable Docs • Community • Grid AI • Licence



Continuous Integration

System / PyTorch ver.   1.6 (min. req.)    1.7 (latest)
Linux py3.{6,8}         CI full testing    CI full testing
OSX py3.{6,8}           CI full testing    CI full testing
Windows py3.7*          CI base testing    CI base testing
  • * tests only the package itself; the full test suite (the tests folder) is skipped

Install

Simple installation from PyPI

pip install pytorch-lightning-bolts

Install bleeding-edge (no guarantees)

pip install git+https://github.com/PytorchLightning/pytorch-lightning-bolts.git@master --upgrade

To get the full experience, install all optional packages at once:

pip install pytorch-lightning-bolts["extra"]

What is Bolts

Bolts is a deep learning research and production toolbox of:

  • SOTA pretrained models.
  • Model components.
  • Callbacks.
  • Losses.
  • Datasets.

Main Goals of Bolts

The main goal of Bolts is to enable rapid model idea iteration.

Example 1: Finetuning on data

from pl_bolts.models.self_supervised import SimCLR
from pl_bolts.models.self_supervised.simclr.transforms import SimCLRTrainDataTransform, SimCLREvalDataTransform
from torch.utils.data import DataLoader
import pytorch_lightning as pl

# data (MyDataset stands in for your own dataset class)
train_data = DataLoader(MyDataset(transforms=SimCLRTrainDataTransform(input_height=32)))
val_data = DataLoader(MyDataset(transforms=SimCLREvalDataTransform(input_height=32)))

# model
weight_path = 'https://pl-bolts-weights.s3.us-east-2.amazonaws.com/simclr/bolts_simclr_imagenet/simclr_imagenet.ckpt'
simclr = SimCLR.load_from_checkpoint(weight_path, strict=False)

simclr.freeze()

# finetune
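
The finetune step is left open in the snippet above. A minimal sketch of one way to continue: wrap the frozen SimCLR network as a feature extractor and train a small linear head on top. The sketch assumes the loaded checkpoint maps an image batch to a 2048-dimensional feature vector (ResNet-50 backbone) and that your dataloaders yield plain (image, label) batches; adjust both to your setup.

import torch
from torch.nn import functional as F

class SimCLRFineTuner(pl.LightningModule):
    def __init__(self, backbone, num_classes=10):
        super().__init__()
        self.backbone = backbone                        # frozen SimCLR network
        self.head = torch.nn.Linear(2048, num_classes)  # 2048 assumes a ResNet-50 encoder

    def training_step(self, batch, batch_idx):
        x, y = batch
        with torch.no_grad():
            feats = self.backbone(x)                    # assumed to return (batch, 2048) features
        loss = F.cross_entropy(self.head(feats), y)
        return loss

    def configure_optimizers(self):
        return torch.optim.Adam(self.head.parameters(), lr=1e-3)

finetuner = SimCLRFineTuner(simclr, num_classes=10)
trainer = pl.Trainer(max_epochs=10)
trainer.fit(finetuner, train_data)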

Example 2: Subclass and ideate

from pl_bolts.models import ImageGPT
from pl_bolts.models.self_supervised import SimCLR

class VideoGPT(ImageGPT):

    def training_step(self, batch, batch_idx):
        x, y = batch
        x = _shape_input(x)  # reshape the video batch into the format ImageGPT expects

        logits = self.gpt(x)
        simclr_features = self.simclr(x)  # a SimCLR encoder attached in __init__ (not shown)

        # -----------------
        # do something new with GPT logits + simclr_features
        # -----------------

        loss = self.criterion(logits.view(-1, logits.size(-1)), x.view(-1).long())

        logs = {"loss": loss}
        return {"loss": loss, "log": logs}

Who is Bolts for?

  • Corporate production teams
  • Professional researchers
  • Ph.D. students
  • Linear + Logistic regression heroes

I don't need deep learning

Great! We have LinearRegression and LogisticRegression implementations, with NumPy and scikit-learn bridges for datasets! But our implementations work on multiple GPUs and TPUs, and scale to much larger datasets...

Check out our Linear Regression on TPU demo

from pl_bolts.models.regression import LinearRegression
from pl_bolts.datamodules import SklearnDataModule
from sklearn.datasets import load_diabetes
import pytorch_lightning as pl

# sklearn dataset (load_boston has been removed from scikit-learn; load_diabetes works the same way)
X, y = load_diabetes(return_X_y=True)
loaders = SklearnDataModule(X, y)

model = LinearRegression(input_dim=10)  # the diabetes dataset has 10 features

# try with gpus=4!
# trainer = pl.Trainer(gpus=4)
trainer = pl.Trainer()
trainer.fit(model, train_dataloader=loaders.train_dataloader(), val_dataloaders=loaders.val_dataloader())
trainer.test(test_dataloaders=loaders.test_dataloader())
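
LogisticRegression follows the same pattern. A minimal sketch using scikit-learn's digits dataset; the dataset choice and hyperparameters here are illustrative, not taken from the Bolts docs:

from pl_bolts.models.regression import LogisticRegression
from pl_bolts.datamodules import SklearnDataModule
from sklearn.datasets import load_digits
import pytorch_lightning as pl

# 8x8 digit images flattened to 64 features, 10 classes
X, y = load_digits(return_X_y=True)
loaders = SklearnDataModule(X, y)

model = LogisticRegression(input_dim=64, num_classes=10)
trainer = pl.Trainer(max_epochs=2)
trainer.fit(model, train_dataloader=loaders.train_dataloader(), val_dataloaders=loaders.val_dataloader())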

Is this another model zoo?

No!

Bolts is unique because models are implemented using PyTorch Lightning and structured so that they can be easily subclassed and iterated on.

For example, you can override the ELBO loss of a VAE, or the generator_step of a GAN, to quickly try out a new idea. The best part is that all the models are benchmarked, so you won't waste time trying to "reproduce" results or hunting for bugs in your implementation.
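
A minimal sketch of that kind of override on the basic GAN. The attribute names used inside the override (self.generator, self.discriminator, hparams.latent_dim) and the exact return contract are assumptions about the installed Bolts/Lightning versions, not guaranteed API:

import torch
import torch.nn.functional as F
from pl_bolts.models.gans import GAN

class MyGAN(GAN):
    def generator_step(self, x):
        # sample noise and generate fake images
        z = torch.randn(x.size(0), self.hparams.latent_dim, device=x.device)
        fake = self.generator(z)

        # swap the stock generator objective for whatever new idea you want to test
        # (assumes the discriminator returns logits; adapt to your discriminator's output)
        d_out = self.discriminator(fake)
        loss = F.binary_cross_entropy_with_logits(d_out, torch.ones_like(d_out))
        return loss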

Team

Bolts is supported by the PyTorch Lightning team and the PyTorch Lightning community!


Licence

Please observe the Apache 2.0 license that is listed in this repository. In addition, the Lightning framework is Patent Pending.

Citation

To cite Bolts, use:

@article{falcon2020framework,
  title={A Framework For Contrastive Self-Supervised Learning And Designing A New Approach},
  author={Falcon, William and Cho, Kyunghyun},
  journal={arXiv preprint arXiv:2009.00104},
  year={2020}
}

To cite other contributed models or modules, please cite the authors directly (if they don't have a BibTeX entry, ping them in a GitHub issue).

Comments
  • Add RetinaNet Object detection with Backbones


    What does this PR do?

    Fixes #391

    Before submitting

    • [x] Was this discussed/approved via a Github issue? (no need for typos and docs improvements)
    • [x] Did you read the contributor guideline, Pull Request section?
    • [x] Did you make sure your PR does only one thing, instead of bundling different changes together?
    • [x] Did you make sure to update the documentation with your changes?
    • [x] Did you write any new necessary tests? [not needed for typos/docs]
    • [x] Did you verify new and existing tests pass locally with your changes?
    • [x] If you made a notable change (that affects users), did you update the CHANGELOG?

    PR review

    • [x] Is this pull request ready for review?

    Did you have fun?

    I think yes :stuck_out_tongue:

    ready model 
    opened by oke-aditya 45
  • Add YOLO object detection model


    What does this PR do?

    This PR adds the YOLO object detection model. The implementation is based on the YOLOv3 and YOLOv4 Darknet implementations, although it doesn't include all the features of YOLOv4. Detection seems to work with weights that have been trained using the Darknet implementation, so the network architecture should be more or less identical. The network architecture is read from a configuration file in the same format as in the Darknet implementation. It supports loading weights from a Darknet model file too, if you don't want to start training from a randomly initialized model.

    Fixes #22

    Before submitting

    • [x] Was this discussed/approved via a Github issue? (no need for typos and docs improvements)
    • [x] Did you read the contributor guideline, Pull Request section?
    • [x] Did you make sure your PR does only one thing, instead of bundling different changes together?
    • [x] Did you make sure to update the documentation with your changes?
    • [x] Did you write any new necessary tests? [not needed for typos/docs]
    • [x] Did you verify new and existing tests pass locally with your changes?
    • [x] If you made a notable change (that affects users), did you update the CHANGELOG?

    PR review

    • [x] Is this pull request ready for review? (if not, please submit in draft mode)

    Anyone in the community is free to review the PR once the tests have passed. If we didn't discuss your PR in Github issues there's a high chance it will not be merged.

    Did you have fun?

    Make sure you had fun coding 🙃

    enhancement model datamodule 
    opened by senarvi 36
  • Add SRGAN and datamodules for super resolution


    What does this PR do?

    Adds a SRGAN implementation to bolts as proposed in #412.

    Closes #412

    Before submitting

    • [x] Was this discussed/approved via a Github issue? (no need for typos and docs improvements)
    • [x] Did you read the contributor guideline, Pull Request section?
    • [x] Did you make sure your PR does only one thing, instead of bundling different changes together? Otherwise, we ask you to create a separate PR for every change.
    • [x] Did you make sure to update the documentation with your changes?
    • [x] Did you write any new necessary tests?
    • [x] Did you verify new and existing tests pass locally with your changes?
    • [x] If you made a notable change (that affects users), did you update the CHANGELOG?
    • [x] Add train logs and example images

    PR review

    • [x] Is this pull request ready for review? (if not, please submit in draft mode)

    Anyone in the community is free to review the PR once the tests have passed. If we didn't discuss your PR in Github issues there's a high chance it will not be merged.

    Did you have fun?

    Make sure you had fun coding 🙃

    ready model datamodule 
    opened by chris-clem 31
  • Adding types to some of datamodules


    What does this PR do?

    Adding types to pl_bolts.datamodules.

    related to #434

    Before submitting

    • [x] Was this discussed/approved via a Github issue? (no need for typos and docs improvements)
    • [x] Did you read the contributor guideline, Pull Request section?
    • [x] Did you make sure your PR does only one thing, instead of bundling different changes together? Otherwise, we ask you to create a separate PR for every change.
    • [ ] Did you make sure to update the documentation with your changes?
    • [ ] Did you write any new necessary tests?
    • [ ] Did you verify new and existing tests pass locally with your changes?
    • [ ] If you made a notable change (that affects users), did you update the CHANGELOG?

    PR review

    • [ ] Is this pull request ready for review? (if not, please submit in draft mode)

    Anyone in the community is free to review the PR once the tests have passed. If we didn't discuss your PR in Github issues there's a high chance it will not be merged.

    Did you have fun?

    Make sure you had fun coding 🙃

    Priority datamodule refactoring 
    opened by briankosw 25
  • Add DCGAN module


    What does this PR do?

    As proposed in #401, this PR adds a DCGAN implementation closely following the one in PyTorch's examples (https://github.com/pytorch/examples/blob/master/dcgan/main.py).

    Fixes #401

    Before submitting

    • [x] Was this discussed/approved via a Github issue? (no need for typos and docs improvements)
    • [x] Did you read the contributor guideline, Pull Request section?
    • [x] Did you make sure your PR does only one thing, instead of bundling different changes together? Otherwise, we ask you to create a separate PR for every change.
    • [x] Did you make sure to update the documentation with your changes?
    • [x] Did you write any new necessary tests?
    • [x] Did you verify new and existing tests pass locally with your changes?
    • [x] If you made a notable change (that affects users), did you update the CHANGELOG?

    PR review

    • [x] Is this pull request ready for review? (if not, please submit in draft mode)

    Anyone in the community is free to review the PR once the tests have passed. If we didn't discuss your PR in Github issues there's a high chance it will not be merged.

    Did you have fun?

    Make sure you had fun coding 🙃

    enhancement ready model 
    opened by chris-clem 24
  • Add EMNISTDataModule


    What does this PR do?

    Closes #672, #676 and #685.

    A summary of changes and modifications :star: :fire:

    • File Added:

      • [x] pl_bolts/datasets/emnist_dataset.py :green_circle:
      • [x] Contents:
        • [x] EMNIST_METADATA
        • [x] EMNIST dataset
        • [x] BinaryEMNIST dataset Need New PR or add to #672 :warning:
    • File Added:

      • [x] pl_bolts/datamodules/emnist_dataset.py :green_circle:
      • [x] Contents:
        • [x] EMNISTDataModule
        • [x] BinaryEMNISTDataModule Need New PR or add to #672 :warning:
    • Files Modified

      • Package: pl_bolts

        • [x] pl_bolts/datasets/__init__.py :green_circle:
        • [x] pl_bolts/datamodules/__init__.py :green_circle:
      • Tests:

        • For datamodules:
          • [x] tests/datamodules/test_imports.py :green_circle:
          • [x] tests/datamodules/test_datamodules.py WIP :orange_circle:

    Adding BinaryEMNIST and BinaryEMNISTDataModule was logical, looking at how MNIST and BinaryMNIST (dataset and datamodules) were implemented.

    About the dataset

    image source: https://arxiv.org/pdf/1702.05373.pdf [Table-I]

    image source: https://arxiv.org/pdf/1702.05373.pdf [Table-II]

    Before submitting

    • [x] Was this discussed/approved via a Github issue? (no need for typos and docs improvements) #672
    • [x] Did you read the contributor guideline, Pull Request section? Y :green_circle:
    • [x] Did you make sure your PR does only one thing, instead of bundling different changes together? Y :green_circle:
    • [x] Did you make sure to update the documentation with your changes? Y :green_circle:
    • [x] Did you write any new necessary tests? [not needed for typos/docs] Y :green_circle:
    • [x] Did you verify new and existing tests pass locally with your changes? Y :green_circle:
    • [x] If you made a notable change (that affects users), did you update the CHANGELOG? Y :green_circle:

    PR review

    • [x] Is this pull request ready for review? (if not, please submit in draft mode) READY :green_circle:

    Anyone in the community is free to review the PR once the tests have passed. If we didn't discuss your PR in Github issues there's a high chance it will not be merged.

    ready datamodule 
    opened by sugatoray 19
  • Implemented GIoU


    What does this PR do?

    Implements Generalized Intersection over Union as mentioned in #251

    Before submitting

    • [x] Was this discussed/approved via a Github issue? (no need for typos and docs improvements)
    • [x] Did you read the contributor guideline, Pull Request section?
    • [x] Did you make sure your PR does only one thing, instead of bundling different changes together? Otherwise, we ask you to create a separate PR for every change.
    • [x] Did you make sure to update the documentation with your changes?
    • [x] Did you write any new necessary tests?
    • [x] Did you verify new and existing tests pass locally with your changes?
    • [x] If you made a notable change (that affects users), did you update the CHANGELOG?

    PR review

    • [x] Is this pull request ready for review? (if not, please submit in draft mode)

    Anyone in the community is free to review the PR once the tests have passed. If we didn't discuss your PR in Github issues there's a high chance it will not be merged.

    Did you have fun?

    Make sure you had fun coding 🙃

    enhancement 
    opened by briankosw 19
  • Call for core contributors 🧙


    🚀 Feature

    First, we are very happy about all the contributions the community has made so far! Unfortunately, we are running a bit short on contributors :( Second, we would like to re-ignite this project/repository and get it back on track with the latest research and the PL API! Third, as part of the challenge, we are going to rethink the structure and integration process to be as up-to-date and smooth as possible (also see complementary issue #741)

    Motivation

    We want to form a new team of contributors willing to take on this challenge of re-igniting the project in the best Lightning spirit!

    Pitch

    Become a key contributor, collaborate with the best, learn and practice what you love and help us make the Lightning community an even better place!

    Alternatives

    Ping @Borda on slack to chat more...

    Additional context

    Note that being part of Bolts' core team is not the same as being a core contributor to the main PL repo, but it will set you on a promising track to becoming PL core later on...

    enhancement help wanted won't fix discussion 
    opened by Borda 18
  • ci: Fix possible OOM error `Process completed with exit code 137`


    🐛 Bug

    It seems that the CI full testing / pytest (ubuntu-20.04, *, *) jobs in particular tend to fail with the error:

    /home/runner/work/_temp/5ef79e81-ccef-44a4-91a6-610886c324a6.sh: line 2:  1855 Killed                  coverage run --source pl_bolts -m pytest pl_bolts tests --exitfirst -v --junitxml=junit/test-results-Linux-3.7-latest.xml
    Error: Process completed with exit code 137.
    

    Example CI runs

    • https://github.com/PyTorchLightning/pytorch-lightning-bolts/runs/1459479942
    • https://github.com/PyTorchLightning/pytorch-lightning-bolts/runs/1459753659
    • https://github.com/PyTorchLightning/pytorch-lightning-bolts/runs/1459754977

    This error might also happen on other OSes or versions; I haven't investigated yet.

    To Reproduce

    Not sure how to reproduce...

    Additional context

    Found while handling the dataset caching issue in https://github.com/PyTorchLightning/pytorch-lightning-bolts/pull/387#issuecomment-734396787.

    bug help wanted ci/cd 
    opened by akihironitta 18
  • ci: Fix dataset downloading errors


    What does this PR do?

    As pointed out in https://github.com/PyTorchLightning/pytorch-lightning-bolts/pull/377#issuecomment-730193148 by @Borda, the tests try to download datasets, which sometimes fail with the following error:

    UNEXPECTED EXCEPTION: RuntimeError('Failed download from https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz')

    Description of the changes

    1. ~It seems that those failing tests are often doctest, so this PR simply removes the doctest from ci_test-full.yml as we still have doctest in ci_test-base.yml.~ ~https://github.com/PyTorchLightning/pytorch-lightning-bolts/blob/b8ac85154465956b06fd1005b21b071af5493f11/.github/workflows/ci_test-full.yml#L86~ ~https://github.com/PyTorchLightning/pytorch-lightning-bolts/blob/b8ac85154465956b06fd1005b21b071af5493f11/.github/workflows/ci_test-base.yml#L69~
    2. ~This PR also includes minor changes in some tests using LitMNIST to utilize dataset caching since they currently download and store MNIST datasets in ./ instead of in ./datasets/ (datadir fixture).~ See #414.

    Before submitting

    • [x] Was this discussed/approved via a Github issue? (no need for typos and docs improvements)
    • [x] Did you read the contributor guideline, Pull Request section?
    • [x] Did you make sure your PR does only one thing, instead of bundling different changes together? Otherwise, we ask you to create a separate PR for every change.
    • [x] Did you make sure to update the documentation with your changes?
    • [ ] Did you write any new necessary tests?
    • [ ] Did you verify new and existing tests pass locally with your changes?
    • [x] If you made a notable change (that affects users), did you update the CHANGELOG?

    PR review

    • [ ] Is this pull request ready for review? (if not, please submit in draft mode)

    Anyone in the community is free to review the PR once the tests have passed. If we didn't discuss your PR in Github issues there's a high chance it will not be merged.

    Did you have fun?

    Make sure you had fun coding 🙃

    bug ci/cd datamodule 
    opened by akihironitta 17
  • Adds Backbones to FRCNN Take 2


    What does this PR do?

    Before submitting

    • [x] Was this discussed/approved via a Github issue? (no need for typos and docs improvements)
    • [x] Did you read the contributor guideline, Pull Request section?
    • [x] Did you make sure your PR does only one thing, instead of bundling different changes together?
    • [x] Did you make sure to update the documentation with your changes?
    • [x] Did you write any new necessary tests? [not needed for typos/docs]
    • [x] Did you verify new and existing tests pass locally with your changes?
    • [x] If you made a notable change (that affects users), did you update the CHANGELOG?

    PR review

    • [x] Is this pull request ready for review? (if not, please submit in draft mode)

    Anyone in the community is free to review the PR once the tests have passed. If we didn't discuss your PR in Github issues there's a high chance it will not be merged.

    Did you have fun?

    Make sure you had fun coding 🙃

    Redo #382 . Closes #340

    ready model 
    opened by oke-aditya 16
  • Revision pl_bolts.datamodules.cityscapes_datamodule.CityscapesDataModule


    What does this PR do?

    Related to #839

    • update docstring
    • add data type color, polygon along with torchvision cityscape
    • add train extra dataloader for coarse dataset
    • add color data type test

    Before submitting

    • [x] Was this discussed/approved via a Github issue? (no need for typos and docs improvements)
    • [x] Did you read the contributor guideline, Pull Request section?
    • [x] Did you make sure your PR does only one thing, instead of bundling different changes together?
    • [x] Did you make sure to update the documentation with your changes?
    • [x] Did you write any new necessary tests? [not needed for typos/docs]
    • [x] Did you verify new and existing tests pass locally with your changes?
    • [ ] If you made a notable change (that affects users), did you update the CHANGELOG?

    PR review

    • [x] Is this pull request ready for review? (if not, please submit in draft mode)

    Anyone in the community is free to review the PR once the tests have passed. If we didn't discuss your PR in Github issues there's a high chance it will not be merged.

    Did you have fun?

    Make sure you had fun coding 🙃

    datamodule 
    opened by lijm1358 0
  • PrintTableMetricsCallback does not handle metrics with periods


    🐛 Bug

    When logged metrics have a period, the PrintTableMetricsCallback produces a KeyError.

    To Reproduce

    Code sample

    from pl_bolts.callbacks.printing import dicts_to_table
    
    print(dicts_to_table([{"metrics/class.a": 0.5}]))
    

    Expected behavior

    metrics/class.a
    ---------------
    0.5
    

    Actual behavior

    Traceback (most recent call last):
      File "/home/coder/temp.py", line 3, in <module>
        dicts_to_table([{"metrics/class.a": 0.5}])
      File "/home/coder/.direnv/python-3.8.10/lib/python3.8/site-packages/pl_bolts/utils/stability.py", line 87, in wrapper
        return cls_or_callable(*args, **kwargs)
      File "/home/coder/.direnv/python-3.8.10/lib/python3.8/site-packages/pl_bolts/callbacks/printing.py", line 129, in dicts_to_table
        line = s.format(**d, **marked_values)
    KeyError: 'metrics/class'
    

    Environment

    • PyTorch Version (e.g., 1.0): 1.12.1+cu116
    • OS (e.g., Linux): Linux
    • How you installed PyTorch (conda, pip, source): pip
    • Build command you used (if compiling from source): N/A
    • Python version: 3.8
    • CUDA/cuDNN version: 11.6
    • GPU models and configuration: N/A
    • Any other relevant information: N/A

    Additional context

    The underlying problem is that Python format strings cannot contain a period in field names. To reproduce the underlying issue:

    "{metrics/class.a}".format(**{"metrics/class.a": 0.5})
    
    help wanted 
    opened by anaoum 0
  • Update scikit-learn requirement from <=1.1.3,>=1.0.2 to >=1.0.2,<1.2.1 in /requirements


    Updates the requirements on scikit-learn to permit the latest version.

    Release notes

    Sourced from scikit-learn's releases.

    Scikit-learn 1.2.0

    We're happy to announce the 1.2.0 release.

    You can read the release highlights under https://scikit-learn.org/stable/auto_examples/release_highlights/plot_release_highlights_1_2_0.html and the long version of the change log under https://scikit-learn.org/stable/whats_new/v1.2.html

    This version supports Python versions 3.8 to 3.11.

    Commits

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    ci/cd 
    opened by dependabot[bot] 1
  • SSLOnlineEvaluator does not work with DDP


    🐛 Bug

    In commit 6e14209185c2b2100f3e515ee6782597673bb921 on pytorch_lightning from Feb 17, the use_ddp property was removed from AcceleratorConnector.

    In commit b29b07e9788311326bca4779d70e89eb36bfc57f on pytorch_lightning from Feb 27, the use_dp property was removed from AcceleratorConnector.

    The SSLOnlineEvaluator now throws exceptions with multiple GPUs since it checks for these properties in distributed training.

    To Reproduce

    Steps to reproduce the behavior:

    Must run on a system with 2+ GPUs attached and accessible to PyTorch.

    1. Create a pl.Trainer
    2. Attach an SSLOnlineEvaluator Callback
    3. Call trainer.fit

    Code sample:

    import torch
    import pytorch_lightning as pl
    import pl_bolts
    
    
    def main():
        zdim = 2048
        bs = 8
    
        ds = pl_bolts.datasets.DummyDataset(
                (3, 224, 224),
                (1, ),
                num_samples = 100
        )
        dl = torch.utils.data.DataLoader(ds, batch_size=bs)
    
        model = pl_bolts.models.self_supervised.SimCLR(
                gpus = torch.cuda.device_count(),
                num_samples = len(ds),
                batch_size = bs,
                dataset = 'custom',
                hidden_mlp = zdim,
        )
    
    # fit
        trainer = pl.Trainer(
            accelerator = 'gpu',
            devices = -1,
            callbacks = [
                pl_bolts.callbacks.SSLOnlineEvaluator(
                    z_dim = zdim,
                    num_classes = 4, # or any other number
                    hidden_dim = None,
                    dataset = 'custom'
                ),
            ],
        )
    
        trainer.fit(model, train_dataloaders = dl)
    if __name__ == '__main__':
        main()
    

    Leads to the following

    Traceback (most recent call last):
      File "example.py", line 41, in <module>
        main()
      File "example.py", line 39, in main
        trainer.fit(model, train_dataloaders = dl)
      File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 604, in fit
        self, self._fit_impl, model, train_dataloaders, val_dataloaders, datamodule, ckpt_path
      File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning/trainer/call.py", line 36, in _call_and_handle_interrupt
        return trainer.strategy.launcher.launch(trainer_fn, *args, trainer=trainer, **kwargs)
      File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning/strategies/launchers/multiprocessing.py", line 117, in launch
        start_method=self._start_method,
      File "/opt/conda/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 198, in start_processes
        while not context.join():
      File "/opt/conda/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 160, in join
        raise ProcessRaisedException(msg, error_index, failed_process.pid)
    torch.multiprocessing.spawn.ProcessRaisedException:
    
    -- Process 1 terminated with the following error:
    Traceback (most recent call last):
      File "/opt/conda/lib/python3.7/site-packages/torch/multiprocessing/spawn.py", line 69, in _wrap
        fn(i, *args)
      File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning/strategies/launchers/multiprocessing.py", line 139, in _wrapping_function
        results = function(*args, **kwargs)
      File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 645, in _fit_impl
        self._run(model, ckpt_path=self.ckpt_path)
      File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1083, in _run
        self._call_callback_hooks("on_fit_start")
      File "/opt/conda/lib/python3.7/site-packages/pytorch_lightning/trainer/trainer.py", line 1380, in _call_callback_hooks
        fn(self, self.lightning_module, *args, **kwargs)
      File "/opt/conda/lib/python3.7/site-packages/pl_bolts/callbacks/ssl_online.py", line 87, in on_fit_start
        if accel.use_ddp:
    AttributeError: 'AcceleratorConnector' object has no attribute 'use_ddp'
    

    Expected behavior

    Environment

    • PyTorch Version (e.g., 1.0): '1.13.0+cu117'
    • Lightning version: '1.8.4.post0'
    • pl_bolts version: '0.6.0.post1'
    • OS (e.g., Linux): Docker (Ubuntu base)
    • How you installed PyTorch (conda, pip, source): Pytorch Docker image
    • Python version: 3.7.11
    • CUDA/cuDNN version: 11.7
    • GPU models and configuration: 2 A10s, 24GB VRAM each

    Additional context

    I currently have it patched on my system as follows, using the old definition of the use_ddp property prior to its removal:

        from pytorch_lightning.trainer.connectors.accelerator_connector import AcceleratorConnector
        # note: _StrategyType must also be imported from the matching pytorch_lightning version (import not shown in the original snippet);
        # wrapping the lambda in property() is needed so that `accel.use_ddp` evaluates to a bool rather than a bound method
        AcceleratorConnector.use_ddp = property(lambda self: self._strategy_type in (
                _StrategyType.BAGUA,
                _StrategyType.DDP,
                _StrategyType.DDP_SPAWN,
                _StrategyType.DDP_SHARDED,
                _StrategyType.DDP_SHARDED_SPAWN,
                _StrategyType.DDP_FULLY_SHARDED,
                _StrategyType.DEEPSPEED,
                _StrategyType.TPU_SPAWN,
            ))
    
    help wanted 
    opened by shubhamkulkarni01 0
  • Filterwarning in under-review decorator


    The custom filterwarning in the under-review decorator leads to global rules being overwritten:

    https://github.com/Lightning-AI/lightning-bolts/blob/c26c8d8f407de386038d5fb13297233a8aa052e7/pl_bolts/utils/stability.py#L75

    Due to this, the following will still print the warnings (multiple times):

    import warnings
    
    warnings.simplefilter("ignore")  # This should ignore all warnings
    warnings.warn("test")  # "test" is ignored
    import pl_bolts  # Raises multiple "UnderReviewWarning"
    
    opened by braun-steven 0
Releases (0.6.0.post1)
  • 0.6.0.post1(Dec 16, 2022)

    What's Changed

    • resolve require collisions by @Borda in https://github.com/Lightning-AI/lightning-bolts/pull/938
    • Metrics by @BaruchG in https://github.com/Lightning-AI/lightning-bolts/pull/892

    Full Changelog: https://github.com/Lightning-AI/lightning-bolts/compare/0.6.0...0.6.0.post1

  • 0.6.0(Nov 3, 2022)

    [0.6.0] - 2022-11-03

    Added

    • Updated SparseML callback for latest PyTorch Lightning (#822)

    • Updated torch version to v1.10.X (#815)

    • Dataset specific args method to CIFAR10, ImageNet, MNIST, and STL10 (#890)

    • Migrate to use lightning-utilities (#907)

    • Support PyTorch Lightning v1.8 (#910)

    • Major revision of Bolts

      • under_review flag (#835, #837)
      • Reviewing GAN basics, VisionDataModule, MNISTDataModule, CIFAR10DataModule (#843)
      • Added tests, updated doc-strings for Dummy Datasets (#865)
      • Binary MNIST/EMNIST Datasets and Datamodules (#866)
      • FashionMNIST/EMNIST Datamodules (#871)
      • Revision ArrayDataset (#872)
      • BYOL weight update callback (#867)
      • Revision models.vision.unet, models.vision.segmentation (#880)
      • Revision of SimCLR transforms (#857)
      • Revision Metrics (#878, #887)
      • Revision of BYOL module and tests (#874)
      • Revision of MNIST module (#873)
      • Revision of dataset normalizations (#898)
      • Revision of SimSiam module and tests (#891)
      • Revision datasets.kitti_dataset.KittiDataset (#896)
      • SWAV improvements (#903)
      • minor dcgan-import fix (#921)

    Fixed

    • Removing extra flatten (#809)
    • support number of channels!=3 in YOLOConfiguration (#806)
    • CVE-2007-4559 Patch (#894)

    Contributors

    @ArnolFokam, @Atharva-Phatak, @BaruchG, @Benjamin-Etheredge, @Borda, @Ce11an, @clementpoiret, @kfirgedal, @lijm1358, @matsumotosan, @nishantb06, @otaj, @rohitgr7, @shivammehta25, @TrellixVulnTeam

    If we forgot someone due to not matching commit email with GitHub account, let us know :]

  • 0.5.0(Dec 20, 2021)

    [0.5.0] - 2021-12-20

    Added

    • Added YOLO model (#552)
    • Added SRGAN, SRImageLoggerCallback, TVTDataModule, SRCelebA, SRMNIST, SRSTL10 (#466)
    • Added nn.Module support for FasterRCNN backbone (#661)
    • Added RetinaNet with torchvision backbones (#529)
    • Added Python 3.9 support (#786)

    Changed

    • VAE now uses deterministic KL divergence during training, previously estimated KL divergence by random sampling (#760)

    Removed

    • Removed PyTorch 1.6 support (#786)
    • Removed Python 3.6 support (#785)

    Fixed

    • Fixed doctest fails with ImportError: cannot import name 'Env' from 'gym' (#751)
    • Fixed MoCo v2 missing Cosine Annealing learning scheduler (#757)

    Contributors

    @abhayraw1 @akihironitta @chris-clem @hoangtnm @nmichlo @oke-aditya @Programmer-RD-AI @senarvi

    If we forgot someone due to not matching commit email with GitHub account, let us know :]

  • 0.4.0(Sep 9, 2021)

    [0.4.0] - 2021-09-09

    Added

    • Added Soft Actor Critic (SAC) Model (#627)
    • Added EMNISTDataModule, BinaryEMNISTDataModule, and BinaryEMNIST dataset (#676)
    • Added Advantage Actor-Critic (A2C) Model (#598)
    • Added Torch ORT Callback (#720)
    • Added SparseML Callback (#724)

    Changed

    • Changed the default values of datamodules from pin_memory=False, shuffle=False and num_workers=16 to pin_memory=True, shuffle=True and num_workers=0 (#701)
    • Supporting deprecated attribute usage (#699)

    Fixed

    • Fixed ImageNet val loader to use val transform instead of train transform (#713)
    • Fixed the MNIST download giving HTTP 404 with torchvision>=0.9.1 (#674)
    • Removed momentum updating from val step and add separate val queue (#631)
    • Fixed moving the queue to GPU when resuming checkpoint for SwAV model (#684)
    • Fixed FP16 support with vision GPT model (#694)
    • Removing bias from linear model regularisation (#669)
    • Fixed CPC module issue (#680)
  • 0.3.4(Jun 17, 2021)

    [0.3.4] - 2021-06-17

    Changed

    • Replaced load_boston with load_diabetes in the docs and tests (#629)
    • Added base encoder and MLP dimension arguments to BYOL constructor (#637)

    Fixed

    • Fixed the MNIST download giving HTTP 503 (#633)
    • Fixed type annotation of ExperienceSource.__iter__ (#645)
    • Fixed pretrained_urls on Windows (#652)
    • Fixed logistic regression (#655, #664)
    • Fixed double softmax in SSLEvaluator (#663)
  • 0.3.3(Apr 17, 2021)

    [0.3.3] - 2021-04-17

    Changed

    • Suppressed missing package warnings, conditioned by WARN_MISSING_PACKAGE="1" (#617)
    • Updated all scripts to LARS (#613)

    Fixed

    • Add missing dataclass requirements (#618)
  • 0.3.2(Mar 20, 2021)

    [0.3.2] - 2021-03-20

    Changed

    • Renamed SSL modules: CPCV2 >> CPC_v2 and MocoV2 >> Moco_v2 (#585)
    • Refactored setup.py to be typing friendly (#601)
  • 0.3.1(Mar 9, 2021)

    [0.3.1] - 2021-03-09

    Added

    • Added Pix2Pix model (#533)

    Changed

    • Moved vision models (GPT2, ImageGPT, SemSegment, UNet) to pl_bolts.models.vision (#561)

    Fixed

    • Fixed BYOL moving average update (#574)
    • Fixed custom gamma in rl (#550)
    • Fixed PyTorch 1.8 compatibility issue (#580, #579)
    • Fixed handling batchnorms in BatchGradientVerification (#569)
    • Corrected num_rows calculation in LatentDimInterpolator callback (#573)

    Contributors

    @akihironitta, @aniketmaurya, @BartekRoszak, @FlorianMF, @indigoviolet, @kaushikb11, @mxksowie, @wjn0

    If we forgot someone due to not matching commit email with GitHub account, let us know :]

  • 0.3.0(Jan 20, 2021)

    Detailed changes

    Added

    • Added input_channels argument to UNet (#297)
    • Added SwAV (#239, #348, #323)
    • Added data monitor callbacks ModuleDataMonitor and TrainingDataMonitor (#285)
    • Added DCGAN module (#403)
    • Added VisionDataModule as parent class for BinaryMNISTDataModule, CIFAR10DataModule, FashionMNISTDataModule, and MNISTDataModule (#400)
    • Added GIoU loss (#347)
    • Added IoU loss (#469)
    • Added semantic segmentation model SemSegment with UNet backend (#259)
    • Added option to normalize latent interpolation images (#438)
    • Added flags to datamodules (#388)
    • Added metric GIoU (#347)
    • Added Intersection over Union Metric/Loss (#469)
    • Added SimSiam model (#407)
    • Added gradient verification callback (#465)
    • Added Backbones to FRCNN (#475)

    Changed

    • Decoupled datamodules from models (#332, #270)
    • Set PyTorch Lightning 1.0 as the minimum requirement (#274)
    • Moved pl_bolts.callbacks.self_supervised.BYOLMAWeightUpdate to pl_bolts.callbacks.byol_updates.BYOLMAWeightUpdate (#288)
    • Moved pl_bolts.callbacks.self_supervised.SSLOnlineEvaluator to pl_bolts.callbacks.ssl_online.SSLOnlineEvaluator (#288)
    • Moved pl_bolts.datamodules.*_dataset to pl_bolts.datasets.*_dataset (#275)
    • Ensured sync across val/test step when using DDP (#371)
    • Refactored CLI arguments of models (#394)
    • Upgraded DQN to use .log (#404)
    • Decoupled DataModules from models - CPCV2 (#386)
    • Refactored datamodules/datasets (#338)
    • Refactored Vision DataModules (#400)
    • Refactored pl_bolts.callbacks (#477)
    • Refactored the rest of pl_bolts.models.self_supervised (#481, #479)
    • Updated torchvision.utils.make_grid (https://pytorch.org/docs/stable/torchvision/utils.html#torchvision.utils.make_grid) kwargs to TensorboardGenerativeModelImageSampler (#494)

    Fixed

    • Fixed duplicate warnings when optional packages are unavailable (#341)
    • Fixed ModuleNotFoundError when importing datamodules (#303)
    • Fixed cyclic imports in pl_bolts.utils.self_supervised (#350)
    • Fixed VAE loss to use KL term of ELBO (#330)
    • Fixed dataloaders of MNISTDataModule to use self.batch_size (#331)
    • Fixed missing outputs in SSL hooks for PyTorch Lightning 1.0 (#277)
    • Fixed stl10 datamodule (#369)
    • Fixed SimCLR transforms (#329)
    • Fixed binary MNIST datamodule (#377)
    • Fixed the end-of-batch size mismatch (#389)
    • Fixed remaining batch_size parameter issues in DataModules (#344)
    • Fixed CIFAR num_samples (#432)
    • Fixed DQN run_n_episodes using the wrong environment variable (#525)

    Contributors

    @akihironitta, @ananyahjha93, @annikabrundyn, @awaelchli, @Borda, @briankosw, @chris-clem, @deng-cy, @hecoding, @miccio-dk, @oke-aditya, @SeanNaren, @sid-sundrani, @teddykoker, @zlapp

    If we forgot someone due to not matching commit email with GitHub account, let us know :]

  • 0.2.3(Oct 12, 2020)

    [0.2.3] - 2020-10-12

    Added

    • Enabled PyTorch Lightning 0.10 compatibility (#264)
    • Added dummy datasets (#266)
    • Added KittiDataModule (#248)
    • Added UNet (#247)
    • Added reinforcement learning models, losses and datamodules (#257)
  • 0.2.1(Sep 13, 2020)

    [0.2.1] - 2020-09-13

    Added

    • Added pretrained VAE with resnet encoders and decoders
    • Added pretrained AE with resnet encoders and decoders
    • Added CPC pretrained on CIFAR10 and STL10
    • Verified BYOL implementation

    Changed

    • Dropped all dependencies except PyTorch Lightning and PyTorch
    • Decoupled datamodules from GAN (#206)
    • Modularize AE & VAE (#196)

    Fixed

    • Fixed gym (#221)
    • Fix L1/L2 regularization (#216)
    • Fix max_depth recursion crash in AsynchronousLoader (#191)
  • 0.1.1(Aug 23, 2020)

    [0.1.1] - 2020-08-23

    Added

    • Added Faster RCNN + Pascal VOC DataModule (#157)
    • Added LARSWrapper for better LARS scheduling (#162)
    • Added CPC finetuner (#158)
    • Added BinaryMNISTDataModule (#153)
    • Added learning rate scheduler to BYOL (#148)
    • Added Cityscapes DataModule (#136)
    • Added learning rate scheduler LinearWarmupCosineAnnealingLR (#138)
    • Added BYOL (#144)
    • Added ConfusedLogitCallback (#118)
    • Added an asynchronous single GPU dataloader (#1521)

    Fixed

    • Fixed simclr finetuner (#165)
    • Fixed STL10 finetuner (#164)
    • Fixed Image GPT (#108)
    • Fixed unused MNIST transforms in train/val/test (#109)

    Changed

    • Enhanced train batch function (#107)
  • 0.1.0(Jan 16, 2021)

    [0.1.0] - 2020-07-02

    Added

    • Added setup and repo structure
    • Added requirements
    • Added docs
    • Added Manifest
    • Added coverage
    • Added MNIST template
    • Added VAE template
    • Added GAN + AE + MNIST
    • Added Linear Regression
    • Added Moco2g
    • Added simclr
    • Added RL module
    • Added Loggers
    • Added Transforms
    • Added Tiny Datasets
    • Added regularization to linear + logistic models
    • Added Linear and Logistic Regression tests
    • Added Image GPT
    • Added Recommenders module

    Changed

    • Device is no longer set in the DQN model init
    • Moved RL loss function to the losses module
    • Moved rl.common.experience to datamodules
    • Added a train_batch function to the VPG model to generate a batch of data at each step (POC)
    • Experience source no longer gets initialized with a device; instead, the device is passed at each step()
    • Refactored ExperienceSource classes to handle multiple environments.

    Removed

    • Removed N-Step DQN as the latest version of the DQN supports N-Step by setting the n_step arg to n
    • Deprecated common.experience

    Fixed

    • Documentation
    • Doctests
    • CI pipeline
    • Imports and pkg
    • CPC fixes