DeFMO: Deblurring and Shape Recovery of Fast Moving Objects (CVPR 2021)

Overview

Evaluation, Training, Demo, and Inference of DeFMO

Denys Rozumnyi, Martin R. Oswald, Vittorio Ferrari, Jiri Matas, Marc Pollefeys

Qualitative results: https://www.youtube.com/watch?v=pmAynZvaaQ4

Pre-trained models

The pre-trained DeFMO model reported in the paper is available here: https://polybox.ethz.ch/index.php/s/M06QR8jHog9GAcF. Put the downloaded files into the ./saved_models sub-folder.
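
If you want to verify the download before running anything, a minimal sanity check such as the sketch below can help. It only assumes that the files placed in ./saved_models are standard PyTorch checkpoints; the exact file names are whatever you extracted.

import glob
import torch

# Try to load every file placed in ./saved_models as a PyTorch checkpoint.
for path in sorted(glob.glob("./saved_models/*")):
    try:
        state = torch.load(path, map_location="cpu")
        print(f"{path}: OK ({type(state).__name__})")
    except Exception as exc:  # corrupted download, or not a checkpoint at all
        print(f"{path}: failed to load ({exc})")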

Inference

For generating video temporal super-resolution:

python run.py --video example/falling_pen.avi

For generating temporal super-resolution of a single frame with the given background:

python run.py --im example/im.png --bgr example/bgr.png
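
To process many inputs in one go, you can wrap the documented command line in a small script. The sketch below is only an illustration: the folder name input_videos and the .avi extension are assumptions, while run.py and its --video flag are the interface shown above.

import pathlib
import subprocess

# Invoke the documented single-video command once per file in the folder.
for video in sorted(pathlib.Path("input_videos").glob("*.avi")):
    print(f"Processing {video} ...")
    subprocess.run(["python", "run.py", "--video", str(video)], check=True)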

Evaluation

After downloading the pre-trained models and the evaluation datasets (see the section on real-world datasets below), you can run

python eval_dataset.py
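
As an optional pre-flight check, you can verify that the models and datasets are in place before starting the evaluation. The folder names below are placeholders, not the repository's required layout; point them at wherever you actually stored the data.

import os

# Placeholder folder names -- adjust to your actual model and dataset paths.
required = ["./saved_models", "./falling_objects", "./TbD-3D", "./TbD"]
missing = [p for p in required if not os.path.isdir(p)]
if missing:
    raise SystemExit(f"Missing folders: {missing} -- download them first.")
print("All inputs found; you can now run: python eval_dataset.py")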

Synthetic dataset generation

For the dataset generation, please download the required assets (including the ShapeNet object models; see renderer/settings.py for the full list of expected paths).

Then, insert your paths in the renderer/settings.py file. To generate the dataset, run the following in the renderer sub-folder:

python run_render.py

Note that the full training dataset, with 50 object categories, 1000 objects per category, and 24 timestamps, takes up to 1 TB of storage. Due to this, and also due to the ShapeNet licence, we cannot make the pre-generated dataset public; please generate it yourself using the steps above.
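
For a rough sense of where the 1 TB figure comes from, the back-of-the-envelope estimate below only re-uses the quantities quoted in this section; nothing here is measured.

# 50 categories x 1000 objects per category x 24 timestamps:
categories, objects_per_category, timestamps = 50, 1000, 24
frames = categories * objects_per_category * timestamps   # 1,200,000 rendered frames
total_bytes = 1e12                                         # "up to 1 TB"
print(f"{frames:,} frames -> roughly {total_bytes / frames / 1e6:.2f} MB per frame on average")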

Training

Set up all paths in main_settings.py and run

python train.py

Evaluation on real-world datasets

All evaluation datasets can be found at http://cmp.felk.cvut.cz/fmo/. We provide a download_datasets.sh script to download the Falling Objects, the TbD-3D, and the TbD datasets.

Reference

If you use this repository, please cite the following publication (https://arxiv.org/abs/2012.00595):

@inproceedings{defmo,
  author = {Denys Rozumnyi and Martin R. Oswald and Vittorio Ferrari and Jiri Matas and Marc Pollefeys},
  title = {DeFMO: Deblurring and Shape Recovery of Fast Moving Objects},
  booktitle = {CVPR},
  address = {Nashville, Tennessee, USA},
  month = jun,
  year = {2021}
}
Comments
  • Question about training set

    Hi, thanks for your generous sharing.

    I have a question about generating the training set in your work. I generated a training set following your code; its size is about 100 GB, far less than 1 TB. Is there anything wrong?

    Thanks.

    opened by fan-hd 11
  • Apply your model on custom longer video clips

    Hi, thank you for releasing your code.

    Can your model be applied to custom videos of high-speed trains crossing? The clips last from 3 to 10 seconds; my idea was to preprocess them with your code so as to keep the same frame rate but get better video quality for later object detection. This is an example frame from the original video clip:

    [image: frame from the original video clip]

    I tried to run your code on a video of about 6 seconds and the result was a much longer video (about 13 min) with a lower level of detail, so I am probably doing something wrong. This is an example frame from the output video clip:

    [image: frame from the output video clip]

    How can I correctly reconstruct the quality of single frames using all the information contained in the video?

    opened by fabiozappo 4
  • Question about comparison with Jin et al.'s work (CVPR 2018)

    Hi, thank you for your interesting work! I have a question about the comparison of methods in your work. When making comparisons, did you retrain Jin et al.'s model ("Learning to Extract a Video Sequence from a Single Motion-Blurred Image", CVPR 2018), or did you just use their pre-trained checkpoints? I couldn't find the training code on their GitHub page.

    opened by zzh-tech 2
  • Padding in Time-Consistency Loss

    Hi,

    Congratulations!

    I noticed that the normalized cross-correlation uses "padding = tuple(side // 10 for side in sh[:2]) + (0,)". Does this only pad the height axis, since the padding tuple works out to (4//10, H//10, 0)?

    Thanks a lot.

    opened by JLiu-Edinburgh 1
  • Run on Google Colab

    I'm confused and need to run this code on Google Colab, or I need more explanation of how to run it in VS Code or another environment. If anyone knows how, please help me.

    opened by ganikas 3