PyTorch Lightning Optical Flow models, scripts, and pretrained weights.

PyTorch Lightning Optical Flow

Introduction

This is a collection of state-of-the-art deep models for estimating optical flow. The main goal is to provide a unified framework where multiple models can be trained and tested more easily.

The work and code from many others are present here. I tried to make sure everything is properly referenced, but please let me know if I missed something.

This is still under development, so some things may not work as intended. I plan to add more models in the future, as well as to keep improving the platform.

Available models

Read more details about the models at https://ptlflow.readthedocs.io/en/latest/models/models_list.html.

Results

You can see a table with the main evaluation results of the available models here. More results are also available in the docs/source/results folder.

Disclaimer: these results were obtained by evaluating the available models in this framework on my machine. Your results may differ due to differences in hardware and software. I also do not guarantee that the results of each model will match those presented in the respective papers or other original sources. If you need to replicate the original results from a paper, you should use the original implementation.

Getting started

Please take a look at the documentation to learn how to install and use PTLFlow.

You can also check the notebooks running on Google Colab for some practical examples. A minimal inference sketch is shown below.
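
The snippet below is a minimal inference sketch based on the Colab examples. It assumes the ptlflow API described in the documentation (get_model, IOAdapter, prepare_inputs, and a 'flows' key in the output dict) and uses hypothetical image paths; method names may vary between versions, so refer to the documentation if something differs.

    import cv2
    import ptlflow
    from ptlflow.utils.io_adapter import IOAdapter

    # Load a model with pretrained weights (downloaded automatically).
    model = ptlflow.get_model('raft_small', pretrained_ckpt='things')
    model.eval()

    # 'img1.png' and 'img2.png' are hypothetical paths to two consecutive frames.
    images = [cv2.imread('img1.png'), cv2.imread('img2.png')]

    # IOAdapter converts the images to the format expected by the model,
    # including padding them to a valid input size.
    io_adapter = IOAdapter(model, images[0].shape[:2])
    inputs = io_adapter.prepare_inputs(images)

    # The output is a dict; the estimated flow is stored under the 'flows' key.
    predictions = model(inputs)
    flows = predictions['flows']  # expected shape: (1, 1, 2, H, W)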

Licenses

The original code of this repository is licensed under the Apache 2.0 license.

Each model may be subject to a different license. The license of each model is included in its respective folder. It is your responsibility to make sure that your project complies with all the licenses and conditions involved.

The external pretrained weights all have different licenses, which are listed in their respective folders.

The pretrained weights that were trained within this project are available under the CC BY-NC-SA 4.0 license, which I believe covers the licenses of the datasets used in the training. That being said, I am not a legal expert, so if you plan to use them for any purpose other than research, you should check all the involved licenses yourself. Additionally, the datasets used for training usually require users to cite the original papers, so be sure to include the respective references in your work.

Contributing

Contributions are welcome! Please check CONTRIBUTING.md to see how to contribute.

Citing

BibTeX

@misc{morimitsu2021ptlflow,
  author = {Henrique Morimitsu},
  title = {PyTorch Lightning Optical Flow},
  year = {2021},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/hmorimitsu/ptlflow}}
}

Acknowledgements

  • This README file is heavily inspired by the one from the timm repository.
  • Some parts of the code were inspired by or taken from FlowNetPytorch.
  • flownet2-pytorch was also another important source.
  • The current main training routine is based on RAFT.
Comments
  • Inference on a whole video (or batch of image pairs) efficiently?

    I would like to process a whole video efficiently, i.e. not calling model.forward() once for every pair of images, and instead batching things together. But I can't quite figure out how to do that with IOAdapter (which I would like to use, to ensure, e.g., that I use the correct padding). Is this possible? I tried formatting my video into a batch of image pairs, of shape (batch, 2, h, w, c), but this didn't seem to be supported by IOAdapter. (A possible batching sketch is shown after this item.)

    opened by zplizzi 4
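
    Regarding the batching question above, here is a hedged sketch of one possible approach: build one padded pair per consecutive frame pair with IOAdapter.prepare_inputs and concatenate the resulting 'images' tensors along the batch dimension. This assumes prepare_inputs returns a dict whose 'images' entry has shape (1, 2, C, H, W) and that the chosen model accepts an arbitrary batch size; neither is guaranteed for every model or version.

    import numpy as np
    import torch
    import ptlflow
    from ptlflow.utils.io_adapter import IOAdapter

    # Dummy stand-in for video frames: a list of HxWxC uint8 arrays.
    frames = [np.random.randint(0, 255, (224, 384, 3), dtype=np.uint8) for _ in range(5)]

    model = ptlflow.get_model('raft_small', pretrained_ckpt='things')
    model.eval()
    io_adapter = IOAdapter(model, frames[0].shape[:2])

    # One padded (1, 2, C, H, W) tensor per consecutive pair, stacked into (B, 2, C, H, W).
    pairs = [io_adapter.prepare_inputs([frames[i], frames[i + 1]])['images']
             for i in range(len(frames) - 1)]
    batched_inputs = {'images': torch.cat(pairs, dim=0)}

    with torch.no_grad():
        predictions = model(batched_inputs)
    flows = predictions['flows']  # expected shape: (B, 1, 2, H, W), if batching is supported
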
  • AttributeError: 'IOAdapter' object has no attribute 'unpad'

    Following the Colab example, the line:

    # Some padding may have been added during prepare_inputs. The line below ensures that the padding is removed
    # to make the predictions have the same size as the original images.
    predictions = io_adapter.unpad(predictions)
    

    gives the error

    AttributeError: 'IOAdapter' object has no attribute 'unpad'

    A simple fix seems to be to replace the line with

    predictions = io_adapter.unpad_and_unscale(predictions)

    which makes the example run. Is this correct?

    opened by duckduck-sys 4
  • question about running from source code

    Hi, I want to make some modifications to this repo. However, after cloning the repo and running train.py, it says No module named 'pytorch_lightning'. I guess the command pip install dist/ptlflow-*.whl might help, but it downloads a torch version >= 1.7, which my GPU does not support. Is there any way I can bypass this error?

    opened by btwbtm 3
  • train on new data

    Hi!

    I'm trying to train the model on my own training data but I get the following error:

    !python train.py raft_small \
      --gpus 1 \
      --train_dataset overfit-sintel \
      --pretrained_ckpt things \
      --val_dataset none \
      --train_batch_size 1 \
      --train_crop_size 512 128 \
      --max_epochs 100 \
      --lr 1e-3
    
    /usr/lib/python3.7/importlib/_bootstrap.py:219: RuntimeWarning: numpy.ufunc size changed, may indicate binary incompatibility. Expected 192 from C header, got 216 from PyObject
      return f(*args, **kwds)
    /usr/lib/python3.7/importlib/_bootstrap.py:219: RuntimeWarning: numpy.ufunc size changed, may indicate binary incompatibility. Expected 192 from C header, got 216 from PyObject
      return f(*args, **kwds)
    ERROR: torch_scatter not found. CSV requires torch_scatter library to run. Check instructions at: https://github.com/rusty1s/pytorch_scatter
    Global seed set to 1234
    Downloading: "https://github.com/hmorimitsu/ptlflow/releases/download/weights1/raft_small-things-b7d9f997.ckpt" to /root/.cache/torch/hub/ptlflow/checkpoints/raft_small-things-b7d9f997.ckpt
    100% 3.81M/3.81M [00:00<00:00, 26.3MB/s]
    GPU available: True, used: True
    TPU available: False, using: 0 TPU cores
    IPU available: False, using: 0 IPUs
    LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
    Traceback (most recent call last):
      File "train.py", line 151, in <module>
        train(args)
      File "train.py", line 111, in train
        trainer.fit(model)
      File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/trainer/trainer.py", line 741, in fit
        self._fit_impl, model, train_dataloaders, val_dataloaders, datamodule, ckpt_path
      File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/trainer/trainer.py", line 685, in _call_and_handle_interrupt
        return trainer_fn(*args, **kwargs)
      File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/trainer/trainer.py", line 777, in _fit_impl
        self._run(model, ckpt_path=ckpt_path)
      File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/trainer/trainer.py", line 1145, in _run
        self.accelerator.setup(self)
      File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/accelerators/gpu.py", line 46, in setup
        return super().setup(trainer)
      File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/accelerators/accelerator.py", line 93, in setup
        self.setup_optimizers(trainer)
      File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/accelerators/accelerator.py", line 352, in setup_optimizers
        trainer=trainer, model=self.lightning_module
      File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/plugins/training_type/training_type_plugin.py", line 245, in init_optimizers
        return trainer.init_optimizers(model)
      File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/trainer/optimizers.py", line 35, in init_optimizers
        optim_conf = self.call_hook("configure_optimizers", pl_module=pl_module)
      File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/trainer/trainer.py", line 1501, in call_hook
        output = model_fx(*args, **kwargs)
      File "/usr/local/lib/python3.7/dist-packages/ptlflow/models/base_model/base_model.py", line 339, in configure_optimizers
        self.train_dataloader()  # Just to initialize dataloader variables
      File "/usr/local/lib/python3.7/dist-packages/ptlflow/models/base_model/base_model.py", line 405, in train_dataloader
        dataset = getattr(self, f'_get_{dataset_name}_dataset')(True, *parsed_vals[2:])
      File "/usr/local/lib/python3.7/dist-packages/ptlflow/models/base_model/base_model.py", line 904, in _get_overfit_dataset
        get_occlusion_mask=False)
      File "/usr/local/lib/python3.7/dist-packages/ptlflow/data/datasets.py", line 1025, in __init__
        f'{passd}, {seq_name}: {len(image_paths)-1} vs {len(flow_paths)}')
    AssertionError: clean, .ipynb_checkpoints: -1 vs 0
    

    I prepared the data as in the example:

    (screenshot of the prepared dataset folder structure)

    For inference, everything works well, but not for training.

    Thank you!

    opened by esgomezm 2
  • error in train with ptlflow_demo_train.ipynb

    Hi, when I run this code in Colab, I get the following error. Please advise. I did not change anything.

    !python train.py raft_small \
      --gpus 1 \
      --train_dataset overfit-sintel \
      --val_dataset none \
      --train_batch_size 1 \
      --max_epochs 100 \
      --lr 1e-3

    ERROR: torch_scatter not found. CSV requires torch_scatter library to run. Check instructions at: https://github.com/rusty1s/pytorch_scatter
    Global seed set to 1234
    GPU available: True, used: True
    TPU available: False, using: 0 TPU cores
    IPU available: False, using: 0 IPUs
    LOCAL_RANK: 0 - CUDA_VISIBLE_DEVICES: [0]
    01/02/2022 08:49:01 - WARNING: --train_crop_size is not set. It will be set as (432, 1024).
    01/02/2022 08:49:01 - INFO: Loading 1 samples from Sintel_clean dataset.
    Traceback (most recent call last):
      File "train.py", line 152, in <module>
        train(args)
      File "train.py", line 111, in train
        trainer.fit(model)
      File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/trainer/trainer.py", line 732, in fit
        self._fit_impl, model, train_dataloaders, val_dataloaders, datamodule, ckpt_path
      File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/trainer/trainer.py", line 682, in _call_and_handle_interrupt
        return trainer_fn(*args, **kwargs)
      File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/trainer/trainer.py", line 768, in _fit_impl
        results = self._run(model, ckpt_path=ckpt_path)
      File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/trainer/trainer.py", line 1155, in _run
        self.strategy.setup(self)
      File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/strategies/single_device.py", line 76, in setup
        super().setup(trainer)
      File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/strategies/strategy.py", line 118, in setup
        self.setup_optimizers(trainer)
      File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/strategies/strategy.py", line 108, in setup_optimizers
        self.lightning_module
      File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/core/optimizer.py", line 174, in _init_optimizers_and_lr_schedulers
        optim_conf = model.trainer._call_lightning_module_hook("configure_optimizers", pl_module=model)
      File "/usr/local/lib/python3.7/dist-packages/pytorch_lightning/trainer/trainer.py", line 1535, in _call_lightning_module_hook
        output = fn(*args, **kwargs)
      File "/usr/local/lib/python3.7/dist-packages/ptlflow/models/base_model/base_model.py", line 358, in configure_optimizers
        optimizer, self.args.lr, total_steps=self.args.max_steps, pct_start=0.05, cycle_momentum=False, anneal_strategy='linear')
      File "/usr/local/lib/python3.7/dist-packages/torch/optim/lr_scheduler.py", line 1452, in __init__
        raise ValueError("Expected positive integer total_steps, but got {}".format(total_steps))
    ValueError: Expected positive integer total_steps, but got -1
    

    opened by Cyrus1993 2
  • results of raft on sintel after training on kitti

    Hi Henrique, thanks for this great repo! I have a question about the evaluation results of RAFT on Sintel. The results on Sintel are 6.293/4.687 after training on KITTI. In the RAFT paper, the authors reported 1.61/2.86 (Table 1). I understand the model was fine-tuned only on KITTI after training on Sintel, so its performance on Sintel drops. I also obtained similar results after this training procedure.

    In order to achieve the results reported in the RAFT paper, it seems we should mix the Sintel data with KITTI. ("When evaluating on the Sintel (test) set, we finetune on the combined clean and final passes of the training set along with KITTI and HD1K data.") Do you have plans to do so? Thanks.

    opened by askerlee 2
  • FastFlowNet - Hard and un-reproducible convergence

    Hi Henrique,

    Thank you so much for sharing this collection of optical flow models. It helped me get going with these models quickly.

    I've been training FastFlowNet on the FlyingChairs dataset with your default configurations. I found that convergence is hard and usually not reproducible. Sometimes the training converges after 16 epochs (45k steps). Sometimes it converges after 47 epochs (130k steps). Sometimes it just does not converge.

    I'm attaching the loss curves for convergence starting at 16 epochs and at 47 epochs as examples.

    (loss curve: convergence starting at 16 epochs)

    (loss curve: convergence starting at 47 epochs)

    Did you see this phenomenon when you were training the model?

    Besides, I compared your loss calculation with the ones in the original FastFlowNet and PWC-Net papers. In both papers, the loss of each pyramid level is multiplied by a weight from the sequence:

    self._weights = [0.005, 0.01, 0.02, 0.08, 0.32]
    

    with 0.005 multiplied by the loss of the highest-resolution pyramid level (1/4 of the original image resolution) and 0.32 multiplied by the loss of the lowest-resolution level (1/64 of the original resolution).

    In your implementation, you reverse the weight sequence and replace the values with a proportional sequence, i.e.:

    self._weights = [0.32, 0.16, 0.08, 0.04, 0.02]
    

    So in your implementation, 0.32 is applied to the highest-resolution pyramid level (a small sketch of this weighted multi-scale loss is included after the comments list).

    Do you have any reason for making this change? Is it because the original weight sequence is even harder to converge with?

    I would really appreciate it if you could advise.

    Best Regards! David

    opened by magsail 1
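
As a follow-up to the FastFlowNet weight discussion above, the sketch below illustrates how such per-level weights enter a multi-scale flow loss. It is only an illustration under stated assumptions (predictions ordered from highest to lowest resolution, ground truth downsampled and rescaled per level); it is not the actual PTLFlow or FastFlowNet implementation.

    import torch
    import torch.nn.functional as F

    def multiscale_flow_loss(flow_preds, flow_gt, weights):
        # flow_preds: list of (B, 2, h_i, w_i) predictions, assumed ordered from the
        # highest to the lowest resolution level. flow_gt: (B, 2, H, W) ground truth.
        total = 0.0
        for pred, w in zip(flow_preds, weights):
            # Downsample the ground truth to the level's resolution and rescale the
            # flow magnitude accordingly (an assumption of this sketch).
            scale = pred.shape[-1] / flow_gt.shape[-1]
            gt = F.interpolate(flow_gt, size=pred.shape[-2:], mode='bilinear',
                               align_corners=False) * scale
            # Per-pixel endpoint error, weighted by the level weight.
            total = total + w * torch.norm(pred - gt, p=2, dim=1).mean()
        return total

    # Paper convention (PWC-Net/FastFlowNet): smallest weight on the highest-resolution level.
    paper_weights = [0.005, 0.01, 0.02, 0.08, 0.32]
    # Reversed, proportional sequence discussed above: largest weight on the highest resolution.
    repo_weights = [0.32, 0.16, 0.08, 0.04, 0.02]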