Exploring Image Deblurring via Blur Kernel Space (CVPR'21)


About the project

We introduce a method to encode the blur operators of an arbitrary dataset of sharp-blur image pairs into a blur kernel space. Assuming the encoded kernel space is close enough to in-the-wild blur operators, we propose an alternating optimization algorithm for blind image deblurring: it approximates an unseen blur operator by a kernel in the encoded space and searches for the corresponding sharp image. By design, the encoded kernel space is fully differentiable and can therefore be easily adopted in deep neural network models.

[Figure: blur kernel space]
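
To make the alternating optimization concrete, here is a minimal PyTorch sketch. It is illustrative only: blur_model stands in for the learned differentiable blur operator, the kernel-code shape is an assumption of ours, and the objective is reduced to a plain L2 data term (the full method may add further priors and regularization).

import torch

def blind_deblur(y, blur_model, code_dim=64, steps=300, lr=1e-2):
    """Alternately refine the sharp estimate x and the kernel code k so that
    blur_model(x, k) reproduces the observed blurry image y."""
    x = y.clone().requires_grad_(True)                # init the sharp estimate with the blurry input
    k = torch.zeros(1, code_dim, requires_grad=True)  # kernel code in the encoded space (shape assumed)
    opt_x = torch.optim.Adam([x], lr=lr)
    opt_k = torch.optim.Adam([k], lr=lr)
    for _ in range(steps):
        # Step 1: fix the sharp estimate, fit the kernel code.
        opt_k.zero_grad()
        loss_k = ((blur_model(x.detach(), k) - y) ** 2).mean()
        loss_k.backward()
        opt_k.step()
        # Step 2: fix the kernel code, refine the sharp image.
        opt_x.zero_grad()
        loss_x = ((blur_model(x, k.detach()) - y) ** 2).mean()
        loss_x.backward()
        opt_x.step()
    return x.detach(), k.detach()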

Details of the method and experimental results can be found in our paper:

@inproceedings{m_Tran-etal-CVPR21, 
  author = {Phong Tran and Anh Tran and Quynh Phung and Minh Hoai}, 
  title = {Explore Image Deblurring via Encoded Blur Kernel Space}, 
  year = {2021}, 
  booktitle = {Proceedings of the {IEEE} Conference on Computer Vision and Pattern Recognition (CVPR)} 
}

Please CITE our paper whenever this repository is used to help produce published results or incorporated into other software.


Table of Contents

  • About the project
  • Getting started
  • Training and evaluation
  • Model Zoo
  • Notes and references

Getting started

Prerequisites

  • Python >= 3.7
  • PyTorch >= 1.4.0
  • CUDA >= 10.0
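
To sanity-check these prerequisites from Python (a minimal probe, assuming only that PyTorch is installed):

import sys
import torch

print("Python:", sys.version.split()[0])                  # want >= 3.7
print("PyTorch:", torch.__version__)                      # want >= 1.4.0
print("CUDA runtime (built with):", torch.version.cuda)   # want >= 10.0
print("CUDA available:", torch.cuda.is_available())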

Installation

git clone https://github.com/VinAIResearch/blur-kernel-space-exploring.git
cd blur-kernel-space-exploring


conda create -n BlurKernelSpace -y python=3.7
conda activate BlurKernelSpace
conda install --file requirements.txt

Training and evaluation

Preparing datasets

You can download the datasets from the Model Zoo section below.

To use your own dataset, organize it as follows:

root
├── blur_imgs
│   ├── 000
│   │   ├── 00000000.png
│   │   ├── 00000001.png
│   │   └── ...
│   └── 001
│       ├── 00000000.png
│       ├── 00000001.png
│       └── ...
└── sharp_imgs
    ├── 000
    │   ├── 00000000.png
    │   ├── 00000001.png
    │   └── ...
    └── 001
        ├── 00000000.png
        ├── 00000001.png
        └── ...

where the root, blur_imgs, and sharp_imgs folders can have arbitrary names. For example, let root, blur_imgs, and sharp_imgs be REDS, train_blur, and train_sharp respectively (i.e., you are using the REDS training set); then use the following scripts to create the lmdb datasets:

python create_lmdb.py --H 720 --W 1280 --C 3 --img_folder REDS/train_sharp --name train_sharp_wval --save_path ../datasets/REDS/train_sharp_wval.lmdb
python create_lmdb.py --H 720 --W 1280 --C 3 --img_folder REDS/train_blur --name train_blur_wval --save_path ../datasets/REDS/train_blur_wval.lmdb

where (H, W, C) is the shape of the images (note that all images in the dataset must have the same shape), img_folder is the folder that contains the images, name is the name of the dataset, and save_path is the save destination (save_path must end with .lmdb).

When the script finishes, the two folders train_sharp_wval.lmdb and train_blur_wval.lmdb will be created at the given save_path (datasets/REDS in this example).
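
To verify that an lmdb dataset was written correctly, you can inspect it with the lmdb Python package. This is a small sketch of ours, not part of the repository; the key layout inside the database is up to create_lmdb.py, so it only counts records and prints the first few keys.

import lmdb

# Open the newly created database read-only and inspect its contents.
env = lmdb.open("datasets/REDS/train_sharp_wval.lmdb", readonly=True, lock=False)
with env.begin() as txn:
    print("entries:", txn.stat()["entries"])       # number of stored records
    for i, (key, value) in enumerate(txn.cursor()):
        print(key.decode("ascii", errors="replace"), "-", len(value), "bytes")
        if i >= 4:                                 # show only the first few records
            break
env.close()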

Training

To do image deblurring, data augmentation, and blur generation, you first need to train the blur encoding network (the function F in the paper). This is the only network you need to train. After creating the dataset, change the values of dataroot_HQ and dataroot_LQ in options/kernel_encoding/REDS/woVAE.yml to the paths of the sharp and blur lmdb datasets created above, then train the model with the following script:

python train.py -opt options/kernel_encoding/REDS/woVAE.yml

where -opt is the path to the YAML file that contains the training configuration. You can find some default configurations in the options folder. Checkpoints, training states, and logs will be saved in experiments/modelName. You can change the configuration (learning rate, hyper-parameters, network structure, etc.) in the YAML file.
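
You can edit the YAML file by hand, or patch it programmatically. The sketch below uses PyYAML and assumes the file parses with yaml.safe_load; it sets every occurrence of dataroot_HQ / dataroot_LQ wherever they are nested. Note that yaml.safe_dump discards any comments in the file.

import yaml  # PyYAML

CFG = "options/kernel_encoding/REDS/woVAE.yml"

def set_key(node, key, value):
    """Recursively set every occurrence of `key` in a nested dict."""
    if isinstance(node, dict):
        for k in node:
            if k == key:
                node[k] = value
            else:
                set_key(node[k], key, value)

with open(CFG) as f:
    cfg = yaml.safe_load(f)

# Point the config at the lmdb datasets created earlier.
set_key(cfg, "dataroot_HQ", "datasets/REDS/train_sharp_wval.lmdb")
set_key(cfg, "dataroot_LQ", "datasets/REDS/train_blur_wval.lmdb")

with open(CFG, "w") as f:
    yaml.safe_dump(cfg, f)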

Testing

Data augmentation

To augment a given dataset, first create an lmdb dataset using scripts/create_lmdb.py as before. Then use the following script:

python data_augmentation.py --target_H=720 --target_W=1280 \
                            --source_H=720 --source_W=1280 \
                            --augmented_H=256 --augmented_W=256 \
                            --source_LQ_root=datasets/REDS/train_blur_wval.lmdb \
                            --source_HQ_root=datasets/REDS/train_sharp_wval.lmdb \
                            --target_HQ_root=datasets/REDS/test_sharp_wval.lmdb \
                            --save_path=results/GOPRO_augmented \
                            --num_images=10 \
                            --yml_path=options/data_augmentation/default.yml

where (target_H, target_W), (source_H, source_W), and (augmented_H, augmented_W) are the desired shapes of the target images, source images, and augmented images respectively. source_LQ_root, source_HQ_root, and target_HQ_root are the paths of the lmdb datasets for the reference blur-sharp pairs and for the input sharp images created before. num_images is the size of the augmented dataset, model_path is the path of the trained model, and yml_path is the path to the model configuration file. Results will be saved in save_path.

[Figure: data augmentation examples]
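
Conceptually, the augmentation transfers the blur of a reference blur-sharp pair onto a new sharp image: the trained encoder extracts a kernel code from the pair, and the generator applies that code to the target. The sketch below illustrates only this data flow; KernelEncoder, BlurGenerator, and the tensor shapes are stand-ins of ours, not the repository's actual classes.

import torch
import torch.nn as nn

class KernelEncoder(nn.Module):
    """Stand-in for the paper's F: maps a blur-sharp pair to a kernel code."""
    def __init__(self, code_ch=64):
        super().__init__()
        self.net = nn.Conv2d(6, code_ch, 3, padding=1)  # pair stacked on channels
    def forward(self, blur, sharp):
        return self.net(torch.cat([blur, sharp], dim=1))

class BlurGenerator(nn.Module):
    """Stand-in generator: re-blurs a sharp image conditioned on a kernel code."""
    def __init__(self, code_ch=64):
        super().__init__()
        self.net = nn.Conv2d(3 + code_ch, 3, 3, padding=1)
    def forward(self, sharp, code):
        return self.net(torch.cat([sharp, code], dim=1))

encoder, generator = KernelEncoder(), BlurGenerator()
ref_blur = torch.rand(1, 3, 256, 256)    # reference blurry image
ref_sharp = torch.rand(1, 3, 256, 256)   # its sharp counterpart
new_sharp = torch.rand(1, 3, 256, 256)   # target sharp image to augment

k = encoder(ref_blur, ref_sharp)         # encode the reference blur operator
augmented = generator(new_sharp, k)      # apply it to the new sharp image
print(augmented.shape)                   # torch.Size([1, 3, 256, 256])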

Generate novel blur kernels

To generate a blur image given a sharp image, use the following command:

python generate_blur.py --yml_path=options/generate_blur/default.yml \
                        --image_path=imgs/sharp_imgs/mushishi.png \
                        --num_samples=10 \
                        --save_path=./res.png

where model_path is the path of the pre-trained model, yml_path is the path of the configuration file, and image_path is the path of the sharp image. After running the script, a blur image corresponding to the sharp image will be saved in save_path. Here is some expected output:

[Figure: kernel generation examples]

Note: this only works with models that were trained with the --VAE flag. The size of the input images must be divisible by 128.
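
If your input does not meet the divisible-by-128 requirement, you can crop it first. A small helper using Pillow (the center-crop policy is our choice, not the repository's):

from PIL import Image

def crop_to_multiple(path, out_path, base=128):
    """Center-crop an image so both sides are divisible by `base`."""
    img = Image.open(path)
    w, h = img.size
    new_w, new_h = (w // base) * base, (h // base) * base
    if new_w == 0 or new_h == 0:
        raise ValueError(f"image is smaller than {base}px on a side")
    left, top = (w - new_w) // 2, (h - new_h) // 2
    img.crop((left, top, left + new_w, top + new_h)).save(out_path)

crop_to_multiple("imgs/sharp_imgs/mushishi.png", "mushishi_cropped.png")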

Generic Deblurring

To deblur a blurry image, use the following command:

python generic_deblur.py --image_path imgs/blur_imgs/blur1.png --yml_path options/generic_deblur/default.yml --save_path ./res.png

where image_path is the path of the blurry image and yml_path is the path of the configuration file. The deblurred image will be saved to save_path.

[Figure: image deblurring examples]
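
To deblur every image in a folder, a thin wrapper around the command above is enough. The sketch below only uses the documented flags; the input and output paths are our choices.

import subprocess
from pathlib import Path

blur_dir = Path("imgs/blur_imgs")
out_dir = Path("results/deblurred")
out_dir.mkdir(parents=True, exist_ok=True)

for img in sorted(blur_dir.glob("*.png")):
    # Invoke the repository's CLI once per image.
    subprocess.run([
        "python", "generic_deblur.py",
        "--image_path", str(img),
        "--yml_path", "options/generic_deblur/default.yml",
        "--save_path", str(out_dir / img.name),
    ], check=True)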

Deblurring using sharp image prior

First, you need to download the pre-trained StyleGAN or StyleGAN2 networks. If you want to use StyleGAN, download the mapping and synthesis networks, then rename and copy them to experiments/pretrained/stylegan_mapping.pt and experiments/pretrained/stylegan_synthesis.pt respectively. If you want to use StyleGAN2 instead, download the pretrained model, then rename and copy it to experiments/pretrained/stylegan2.pt.
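
A quick way to confirm the checkpoints landed where the scripts expect them (a sketch of ours; torch.load with map_location="cpu" merely verifies that each file deserializes):

from pathlib import Path
import torch

for ckpt in ["experiments/pretrained/stylegan_mapping.pt",
             "experiments/pretrained/stylegan_synthesis.pt",
             "experiments/pretrained/stylegan2.pt"]:
    p = Path(ckpt)
    if p.exists():
        torch.load(p, map_location="cpu")  # raises if the file is corrupt
        print("ok:", ckpt)
    else:
        print("missing:", ckpt)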

To deblur a blurry image using the StyleGAN latent space as the sharp image prior, use one of the following commands:

python domain_specific_deblur.py --input_dir imgs/blur_faces \
                                 --output_dir experiments/domain_specific_deblur/results \
                                 --yml_path options/domain_specific_deblur/stylegan.yml   # use the latent space of StyleGAN
python domain_specific_deblur.py --input_dir imgs/blur_faces \
                                 --output_dir experiments/domain_specific_deblur/results \
                                 --yml_path options/domain_specific_deblur/stylegan2.yml  # use the latent space of StyleGAN2

Results will be saved in experiments/domain_specific_deblur/results. Note: in general, the code works with images whose sizes are divisible by 128. However, since our blur kernels are not uniform, the kernel size grows with the image size.

[Figure: PULSE-like deblurring examples]

Model Zoo

Pretrained models and their corresponding datasets are listed in the table below. After downloading the datasets and models, follow the instructions in the Testing section to do data augmentation, blur generation, or image deblurring.

Model name          Dataset(s)   Status
REDS woVAE          REDS         ✔️
GOPRO woVAE         GOPRO        ✔️
GOPRO wVAE          GOPRO        ✔️
GOPRO + REDS woVAE  GOPRO, REDS  ✔️

Notes and references

The training code is borrowed from the EDVR project: https://github.com/xinntao/EDVR

The backbone code is borrowed from the DeblurGAN project: https://github.com/KupynOrest/DeblurGAN

The StyleGAN code is borrowed from the PULSE project: https://github.com/adamian98/pulse

The StyleGAN2 code is borrowed from the stylegan2-pytorch project: https://github.com/rosinality/stylegan2-pytorch

Owner

VinAI Research