A Python library for self-supervised learning on images.

Overview

Lightly is a computer vision framework for self-supervised learning.

We at Lightly are passionate engineers who want to make deep learning more efficient. We want to help popularize the use of self-supervised methods to understand and filter raw image data. Our solution can be applied before any data annotation step, and the learned representations can be used to analyze and visualize datasets as well as to select a core set of samples.

Tutorials

Want to jump to the tutorials and see lightly in action?

Benchmarks

Currently implemented models and their accuracy on CIFAR-10. All models have been evaluated with a kNN classifier on the learned representations (a rough sketch of such an evaluation follows the table). We report the maximum test accuracy over all epochs as well as the peak GPU memory consumption. All models in this benchmark use the same augmentations and the same ResNet-18 backbone. Training precision is set to FP32, and SGD with a cosine learning rate schedule is used as the optimizer.

Model     Epochs   Batch Size   Test Accuracy   Peak GPU usage
MoCo      200      128          0.83            2.1 GBytes
SimCLR    200      128          0.78            2.0 GBytes
SimSiam   200      128          0.73            3.0 GBytes
MoCo      200      512          0.85            7.4 GBytes
SimCLR    200      512          0.83            7.8 GBytes
SimSiam   200      512          0.81            7.0 GBytes
MoCo      800      512          0.90            7.2 GBytes
SimCLR    800      512          0.89            7.7 GBytes
SimSiam   800      512          0.91            6.9 GBytes
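
For reference, here is a minimal sketch of a kNN evaluation on frozen embeddings. This is not the exact benchmark code; the feature arrays, the value of k, the cosine metric, and the use of scikit-learn (which is not a listed requirement) are assumptions for illustration only.

    import numpy as np
    from sklearn.neighbors import KNeighborsClassifier

    def knn_eval(train_features, train_labels, test_features, test_labels, k=200):
        """Evaluate frozen representations with a cosine-distance kNN classifier."""
        # L2-normalize the features so that cosine distance behaves as expected
        train_features = train_features / np.linalg.norm(train_features, axis=1, keepdims=True)
        test_features = test_features / np.linalg.norm(test_features, axis=1, keepdims=True)
        knn = KNeighborsClassifier(n_neighbors=k, metric="cosine")
        knn.fit(train_features, train_labels)
        return knn.score(test_features, test_labels)

    # toy usage with random features, just to show the call signature
    rng = np.random.default_rng(0)
    accuracy = knn_eval(
        rng.normal(size=(500, 512)), rng.integers(0, 10, size=500),
        rng.normal(size=(100, 512)), rng.integers(0, 10, size=100),
    )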

Terminology

Below you can see a schematic overview of the different concepts present in the lightly Python package. The terms in bold are explained in more detail in our documentation.

Overview of the lightly pip package

Quick Start

Lightly requires Python 3.6+. We recommend installing Lightly in a Linux or OSX environment.

Requirements

  • hydra-core>=1.0.0
  • numpy>=1.18.1
  • pytorch_lightning>=0.10.0
  • requests>=2.23.0
  • torchvision
  • tqdm

Installation

You can install Lightly and its dependencies from PyPI with:

pip3 install lightly

We strongly recommend that you install Lightly in a dedicated virtualenv, to avoid conflicting with your system packages.

Command-Line Interface

Lightly is also accessible through a command-line interface (CLI). To train a SimCLR model on a folder of images, you can simply run the following command:

lightly-train input_dir=/mydataset

To create an embedding of a dataset you can use:

lightly-embed input_dir=/mydataset checkpoint=/mycheckpoint

The embeddings, together with the corresponding filenames, are stored in a human-readable .csv file.
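
As a rough sketch, the resulting .csv file can be inspected with pandas (not a listed requirement, used here only for illustration). The file path below and the assumption that the first column holds the filenames while the numeric columns hold the embedding values may differ between versions.

    import numpy as np
    import pandas as pd

    # load the embeddings written by lightly-embed (example path)
    df = pd.read_csv("embeddings.csv")

    # assumption: first column holds the filenames, numeric columns the embedding values
    filenames = df.iloc[:, 0].tolist()
    embeddings = df.select_dtypes(include=[np.number]).to_numpy()
    print(len(filenames), embeddings.shape)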

Next Steps

Head to the documentation and see the things you can achieve with Lightly!

Development

To install dev dependencies (for example to contribute to the framework) you can use the following command:

pip3 install -e ".[dev]"

For more information about how to contribute have a look here.

Running Tests

Unit tests are within the tests folder and we recommend running them using pytest. There are two test configurations available. By default, only a subset is run; this is faster and should take less than a minute. You can run it using

python -m pytest -s -v

To run all tests (including the slow ones) you can use the following command.

python -m pytest -s -v --runslow

Code Linting

We provide a Pylint config following the Google Python Style Guide.

You can run the linter from your terminal either on a folder

pylint lightly/

or on a specific file

pylint lightly/core.py

Further Reading

Self-supervised Learning:

FAQ

  • Why should I care about self-supervised learning? Aren't pre-trained models from ImageNet much better for transfer learning?

    • Self-supervised learning has become increasingly popular among scientists over the last year because the learned representations perform extraordinarily well on downstream tasks. This means that they capture the important information in an image better than other types of pre-trained models. By training a self-supervised model on your dataset, you can make sure that the representations have all the necessary information about your images.
  • How can I contribute?

    • Create an issue if you encounter bugs or have ideas for features we should implement. You can also add your own code by forking this repository and creating a PR. More details about how to contribute with code are in our contribution guide.
  • Is this framework for free?

    • Yes, this framework is completely free to use, and we provide the code. We believe that we need to make training deep learning models more data-efficient to achieve widespread adoption. One step toward this goal is to leverage self-supervised learning. The company behind Lightly is committed to keeping this framework open-source.
  • If this framework is free, how is the company behind lightly making money?

    • Training self-supervised models is only part of the solution. The company behind Lightly focuses on processing and analyzing embeddings created by self-supervised models. By building what we call a self-supervised active learning loop, we help companies understand and work with their data more efficiently. This framework acts as an interface for our platform to easily upload and download datasets, embeddings, and models. While the platform charges for additional features, this framework will always remain free of charge (even for commercial use).

Comments
  • Add Masked Autoencoder implementation

    The paper Masked Autoencoders Are Scalable Vision Learners (https://arxiv.org/abs/2111.06377) suggests that a masked autoencoder (similar to pre-training in NLP) works very well as a pretext task for self-supervised learning. Let's add it to Lightly.
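
    For context, the core of the pretext task is to drop a large fraction of the patch tokens before the encoder. A rough sketch of such a masking step (an illustration only, not the eventual lightly implementation) could look like this:

    import torch

    def random_masking(patches: torch.Tensor, mask_ratio: float = 0.75):
        # patches: (batch, num_patches, dim) token embeddings
        b, n, d = patches.shape
        num_keep = int(n * (1 - mask_ratio))
        noise = torch.rand(b, n)                   # random score per patch
        ids_shuffle = torch.argsort(noise, dim=1)  # patches with the lowest scores are kept
        ids_keep = ids_shuffle[:, :num_keep]
        visible = torch.gather(patches, 1, ids_keep.unsqueeze(-1).expand(-1, -1, d))
        # the encoder only sees `visible`; the decoder later fills in mask tokens
        return visible, ids_shuffle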

    type: idea 
    opened by IgorSusmelj 19
  • Feature proposals

    Hi; first of all, thanks for making this library; it looks very useful.

    There are two papers that I'd like to try to implement in your repo:

    Understanding Contrastive Representation Learning through Alignment and Uniformity on the Hypersphere

    This paper introduces a contrastive loss function with some quite different properties from the more cited ones. Out of all the loss functions I've tried for my projects, this is the one I had the most success with, and also the one that a priori seems the most sensible to me of what's currently been published. The paper provides source code, and it's really a trivially implemented additional loss function. I've got experience implementing it, plus some generalizations I came up with that I could provide. It should be an easy addition to this package. My motivation for contributing is a selfish one, as having it included here in the benchmarks would help me better understand its relative strengths and weaknesses on different types of datasets.
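
    For reference, the two terms from the paper can be sketched in a few lines of torch (a rough sketch following the paper's reference implementation, not an official lightly loss):

    import torch

    def align_loss(x, y, alpha=2):
        # x, y: L2-normalized embeddings of the two views, shape (batch, dim)
        return (x - y).norm(p=2, dim=1).pow(alpha).mean()

    def uniform_loss(x, t=2):
        # encourages the embeddings to spread uniformly over the hypersphere
        return torch.pdist(x, p=2).pow(2).mul(-t).exp().mean().log()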

    Scaling Deep Contrastive Learning Batch Size with Almost Constant Peak Memory Usage

    I also recently came across this paper. It also provides (rather ugly, imo) torch code. Implementing it in this package first would be easier for me than implementing it in my private repos, since it'd be easier to properly test the implementation using the infrastructure provided here. The title is quite descriptive; given the importance of batch size for within-batch mining techniques, and the fact that many of us are working with single GPUs, being able to scale batch sizes arbitrarily is super useful, and I think this paper has the right idea of how to go about it.
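
    The core trick of that paper (caching gradients with respect to the representations) can be sketched roughly as follows. This is a generic illustration that assumes the loss only couples samples through their representations; it is not lightly code:

    import torch

    def grad_cache_step(encoder, loss_fn, chunks, optimizer):
        # chunks: the large batch split into pieces that fit into GPU memory
        # 1) representation-only forward pass without building a graph
        with torch.no_grad():
            reps = [encoder(chunk) for chunk in chunks]
        reps = [r.detach().requires_grad_(True) for r in reps]

        # 2) loss over the full batch; backprop stops at the representations
        loss = loss_fn(torch.cat(reps, dim=0))
        loss.backward()
        rep_grads = [r.grad for r in reps]

        # 3) re-encode each chunk with autograd and inject the cached gradients
        for chunk, grad in zip(chunks, rep_grads):
            encoder(chunk).backward(grad)

        optimizer.step()
        optimizer.zero_grad()
        return loss.detach()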

    The contribution guide mentions that such additions should first be discussed, so tell me what you think!

    type: idea 
    opened by EelcoHoogendoorn 14
  • Adding DINO to lightly

    First of all Kudos on creating this amazing library ❤️.

    I think all of us in the self-supervised learning community have heard of DINO. For the past couple of weeks, I have been trying to port Facebook's DINO implementation to PyTorch Lightning. I have implemented it, at least for my use case, which is a Kaggle competition. I initially looked at lightly for an implementation but did not see one, so I borrowed and adapted code from the original implementation and converted it to Lightning.

    Honestly it was a tedious task. I was wondering if you guys would be interested in adding DINO to lightly.

    Here's how I think we should structure the implementation:

    1. Augmentations: DINO relies heavily on a multi-crop strategy. Since lightly already has a collate_fn for implementing augmentations, the DINO augmentations can be implemented in the same way.
    2. Model forward pass and model heads: The forward pass is unusual since we have to deal with multi-crop and global crops, so this needs to be implemented as an nn.Module like the other heads in lightly.
    3. Loss functions: I am not sure how this should be implemented for lightly, although FB has a custom class for it (a rough sketch of the core term follows this list).
    4. Utility functions: FB uses some tricks for stable training and results, so these need to be included as well.
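
    For reference, the core of the DINO loss is a cross-entropy between the centered, sharpened teacher output and the student output. A rough sketch of that term (an illustration only, not the eventual lightly implementation) could look like this:

    import torch
    import torch.nn.functional as F

    def dino_loss_term(student_out, teacher_out, center,
                       student_temp=0.1, teacher_temp=0.04):
        # teacher targets: centered and sharpened, no gradients flow through them
        targets = F.softmax((teacher_out - center) / teacher_temp, dim=-1).detach()
        log_probs = F.log_softmax(student_out / student_temp, dim=-1)
        return -(targets * log_probs).sum(dim=-1).mean()

    In the full loss this term is averaged over all student/teacher crop pairs (excluding identical crops), and the center is updated as an exponential moving average of the teacher outputs.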

    I have used Lightning to implement all of this, and so far, at least for my use case, I was able to train vit-base-16 given my hardware constraints.

    It goes without saying that I would personally like to work on the PR :heart:

    Please let me know.

    opened by Atharva-Phatak 11
  • Queue for SwaV

    Aims to solve https://github.com/lightly-ai/lightly/issues/1006

    Implements a queue for SwAV with minimal code change in the training loop. Implemented as per the details from the paper:

    Rationale

    A trick for small batch sizes: start using the queue prototypes at a later epoch.

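    For illustration, a minimal FIFO feature queue (a generic sketch, not the code in this PR) could look like this:

    import torch

    class FeatureQueue:
        # stores the most recent `size` feature vectors; the oldest entries are overwritten
        def __init__(self, size: int, dim: int):
            self.bank = torch.zeros(size, dim)
            self.ptr = 0
            self.size = size

        @torch.no_grad()
        def update(self, features: torch.Tensor):
            # assumes the features live on the same device as the bank
            n = features.shape[0]
            idx = torch.arange(self.ptr, self.ptr + n) % self.size
            self.bank[idx] = features.detach()
            self.ptr = (self.ptr + n) % self.size
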
    opened by ibro45 10
  • Malte lig 1262 advanced selection external docs

    Description

    • adds docs for selection using the new config
    • adapts existing docs (getting started, docker active learning)
    • already included: DIVERSIFY->DIVERSITY

    Tabs vs. Dropdowns

    Here, tabs are used for the SelectionStrategy and dropdowns for the configuration examples.

    opened by MalteEbner 8
  • MSN: Masked Siamese Networks for Label-Efficient Learning

    Hello, I would like to share this new interesting SSL method proposed by Meta that achieves SOTA results and shows interesting generalization properties.

    It is based on ViT and I think it could be worth integrating into this codebase. Also, the official implementation is available :)

    MSN Paper, MSN GitHub code

    opened by masc-it 8
  • recipe for using the Lightly Docker as API worker

    opened by MalteEbner 8
  • Guarin lig 364 add download urls function to lightlyapi

    Draft for downloading samples with readurl. I included an iterable dataset for completeness.

    The following code takes 20s to load 1k samples:

    import torch
    from tqdm import tqdm

    from lightly.data.iterable_dataset import ImageIterableDataset
    from lightly.api import ApiWorkflowClient

    # connect to the Lightly API (token and dataset id are placeholders)
    client = ApiWorkflowClient(
        token="token",
        dataset_id="dataset id"
    )

    # fetch the read URLs of all raw samples and wrap them in an iterable dataset
    samples = client.download_raw_samples(client.dataset_id)
    dataset = ImageIterableDataset(samples)
    dataloader = torch.utils.data.DataLoader(
        dataset,
        num_workers=8,
        batch_size=8
    )

    # iterate once over the dataset to measure the loading speed
    for image, filename, frame_idx in tqdm(dataloader):
        pass

    The same code for videos runs in 10s for 5 videos with 300 frames each using only 2 workers.

    opened by guarin 8
  • enhance the upload speed

    Using lightly-upload and lightly-magic is not very fast and does not utilise all the cores. There seems to be a bottleneck somewhere.

    Tasks

    • [ ] figure out where most of the time goes during processing
      • [ ] split the images/s number into images/s for processing and images/s for uploading
    • [ ] ensure that when num_workers is set, we really process with that number of workers
    type: enhancement 
    opened by japrescott 8
  • [Active Learning] Compute Active learning scores for object detection

    Create a scorer for object detection that can compute scores like prediction entropy, ...

    Input of predictions: for each of the N samples, there are B bounding boxes, each with a probability for each of the C classes. An entropy-based score over such predictions is sketched below.
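
    For illustration, such a score could be computed as follows (a sketch under the assumed input format above, not the final scorer API):

    import numpy as np

    def entropy_scores(predictions):
        # predictions: list of arrays of shape (B_i, C) with per-box class probabilities
        scores = []
        for probs in predictions:
            if len(probs) == 0:
                scores.append(0.0)
                continue
            box_entropy = -np.sum(probs * np.log(probs + 1e-12), axis=1)
            scores.append(float(box_entropy.max()))  # aggregate, e.g. most uncertain box
        return np.array(scores)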

    • [x] Find a good way to represent the model predictions of an object detection model
    • [x] Decide which kind of models to support (Yolo, SSD, ...), see also https://heartbeat.fritz.ai/a-2019-guide-to-object-detection-9509987954c3
    • [x] Decide which kinds of scores to compute and find a good way to compute them, e.g. see https://heartbeat.fritz.ai/a-2019-guide-to-object-detection-9509987954c3 or https://arxiv.org/pdf/2004.04699.pdf
    • [x] Implement it
    type: enhancement 
    opened by MalteEbner 8
  • NNMemoryBank not working with DataParallel

    I have been using the NNMemoryBank as a component in my module and noticed that at each forward pass NNMemoryBank.bank is equal to None. This only occurs when my module is wrapped in DataParallel. As a result, throughout training my NN pairs are always random noise (surprisingly, this only hurt the contrastive learning performance by a few percentage points on linear probing??).

    Here is a simple test case that highlights the issue:

    import torch
    from lightly.models.modules import NNMemoryBankModule

    # single module: the bank is None before the first forward pass and a tensor afterwards
    memory_bank = NNMemoryBankModule(size=1000)
    print(memory_bank.bank)
    memory_bank(torch.randn((100, 512)))
    print(memory_bank.bank)

    # wrapped in DataParallel: the bank stays None even after the forward pass
    memory_bank = NNMemoryBankModule(size=1000)
    memory_bank = torch.nn.DataParallel(memory_bank, device_ids=[0, 1])
    print(memory_bank.module.bank)
    memory_bank(torch.randn((100, 512)))
    print(memory_bank.module.bank)
    

    The output of the first block is None followed by a torch.Tensor, as expected. The output of the second block is None in both cases.

    opened by kfallah 7
  • Malte lig 2288 download corrupted files as artifact pip

    Description

    • Generated new openapi code
    • Adds the download_compute_worker_run_corruptness_check_information method to download the new corruptness check artifacts

    How was it tested?

    Manually running the example code.

    Why no unittests?

    The other artifact download functions are not tested either, because the code is really tiny. Thus I kept it consistent.

    ONLY MERGE AFTER https://github.com/lightly-ai/lightly-core/pull/2212

    opened by MalteEbner 1
  • MemoryBankModule - no distributed all gather prior to queueing a batch

    If I'm not mistaken, the batch is not all_gathered before the queue is updated. I was just comparing the code against the one it was based on (MoCo's memory bank) and noticed this: https://github.com/facebookresearch/moco/blob/78b69cafae80bc74cd1a89ac3fb365dc20d157d3/moco/builder.py#L55

    Was this done on purpose?

    Is it because different processes would use the same queued batches, unlike the DistributedDataLoader batches, which are indexed in a DDP way?
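
    For reference, MoCo gathers the batch from all processes before enqueueing, along the lines of the following sketch (a generic pattern, not lightly code):

    import torch
    import torch.distributed as dist

    @torch.no_grad()
    def concat_all_gather(x: torch.Tensor) -> torch.Tensor:
        # gather the batch from all processes; gradients do not flow through the result
        if not (dist.is_available() and dist.is_initialized()):
            return x
        gathered = [torch.zeros_like(x) for _ in range(dist.get_world_size())]
        dist.all_gather(gathered, x)
        return torch.cat(gathered, dim=0)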

    opened by ibro45 0
  • how can I reconstruct images (MAE)

    Hi and thank you for writing this great library.

    I'm trying to figure out whether my MAE training worked beyond looking at MSE numbers. I input an image of shape 256x256 (RGB) and then get predictions of shape b x 38 x 3072. I'm using the default torchvision.models.vit_b_32.

    I am not sure how to reshape this back into an image. It also just doesn't make sense based on the numbers: shouldn't there be 64 patches of 3072?
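
    For a 256x256 RGB image with 32x32 patches one would indeed expect 64 patches of 32*32*3 = 3072 values each. A rough unpatchify sketch under that assumption (following the patch layout used in the original MAE code, not a lightly API) is:

    import torch

    def unpatchify(patches: torch.Tensor, patch_size: int = 32, channels: int = 3):
        # patches: (batch, num_patches, patch_size**2 * channels), row-major patch order
        b, n, _ = patches.shape
        h = w = int(n ** 0.5)  # e.g. 64 patches -> an 8 x 8 grid
        x = patches.reshape(b, h, w, patch_size, patch_size, channels)
        x = torch.einsum("bhwpqc->bchpwq", x)
        return x.reshape(b, channels, h * patch_size, w * patch_size)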

    opened by DanTaranis 7
  • SSL Online Evaluator Callback with Custom Data (i.e., different than pre-training data)

    Hi,

    Is it possible to use the online evaluation of the SSL encoder with a linear classifier on data different from the one used for pre-training?

    opened by aqibsaeed 2