Exploit Camera Raw Data for Video Super-Resolution via Hidden Markov Model Inference


RawVSR

This repo contains the official code for our paper:

Exploit Camera Raw Data for Video Super-Resolution via Hidden Markov Model Inference

Xiaohong Liu, Kangdi Shi, Zhe Wang, Jun Chen


Accepted in IEEE Transactions on Image Processing

[Paper Download] [Video]


Dependencies and Installation

  1. Clone repo

    $ git clone https://github.com/proteus1991/RawVSR.git
  2. Install dependent packages

    $ cd RawVSR
    $ pip install -r requirements.txt
  3. Set up the Deformable Convolution Network (DCN)

    Since RawVSR uses DCN to align features extracted from different video frames, we follow the setup in EDVR, where more details can be found.

    $ python setup.py develop

    Note that deform_conv_cuda.cpp and deform_conv_cuda_kernel.cu have been modified to fix compile errors with PyTorch >= 1.7.0. If your PyTorch version is < 1.7.0, you may need to download the original setup code.
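
Before running the build, it may help to confirm that the interpreter invoking setup.py sees a CUDA-enabled PyTorch of the expected version. The minimal check below is only a sketch using standard torch attributes:

# Quick environment check before building DCN (a sketch).
import torch

print("PyTorch version:", torch.__version__)        # the bundled DCN sources target >= 1.7.0
print("CUDA runtime:", torch.version.cuda)           # should match the toolkit used by nvcc
print("CUDA available:", torch.cuda.is_available())  # DCN needs a CUDA-enabled build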

Introduction

  • train.py and test.py are the entry points for training and testing RawVSR.
  • ./data/ contains the code for data loading.
  • ./dataset/ contains the corresponding video sequences.
  • ./dcn/ contains the DCN dependencies.
  • ./models/ contains the code that defines the network.
  • ./utils/ includes the utilities.
  • ./weight_checkpoint/ saves checkpoints and the best network weights.

Raw Video Dataset (RawVD)

Since we are not aware of any publicly available raw video dataset, we build a raw video dataset dubbed RawVD to train our RawVSR.

In this dataset, we provide the ground-truth sRGB frames in the folder 1080p_gt_rgb. Low-resolution (LR) raw frames are in the folders 1080p_lr_d_raw_2 and 1080p_lr_d_raw_4, corresponding to different scale ratios. Their sRGB counterparts are in the folders 1080p_lr_d_rgb_2 and 1080p_lr_d_rgb_4, where the d in the folder names stands for the degradations, including defocus blur and heteroscedastic Gaussian noise. We also release the original raw videos in Magic Lantern Video (MLV) format; the software to play them can be found here. Details can be found in Section 3 of our paper.
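
As a quick orientation, the folder layout described above can be sanity-checked with a few lines of Python. This is only a sketch; the dataset root path below is a placeholder to adjust to wherever RawVD is unpacked:

# Check that the RawVD folders described above are present (illustrative sketch).
from pathlib import Path

rawvd_root = Path("/path/to/RawVD")  # hypothetical location; adjust to your setup
expected = [
    "1080p_gt_rgb",       # ground-truth sRGB frames
    "1080p_lr_d_raw_2",   # LR raw frames, scale ratio 2 (d = degraded)
    "1080p_lr_d_raw_4",   # LR raw frames, scale ratio 4
    "1080p_lr_d_rgb_2",   # sRGB counterparts, scale ratio 2
    "1080p_lr_d_rgb_4",   # sRGB counterparts, scale ratio 4
]
for name in expected:
    print(f"{name}: {'ok' if (rawvd_root / name).is_dir() else 'missing'}")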

Quick Start

1. Testing

Make sure all dependencies are successfully installed.

Run test.py with the --scale_ratio and --save_image flags.

$ python test.py --scale_ratio 4 --save_image

Help for the --scale_ratio and --save_image flags is shown by running:

$ python test.py -h
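
For reference, the two flags correspond to a command-line interface roughly along the following lines; this is an illustrative argparse sketch, not the repo's exact parser:

# Illustrative declaration of the two flags used above (a sketch, not the repo's exact code).
import argparse

parser = argparse.ArgumentParser(description="Test RawVSR on RawVD")
parser.add_argument("--scale_ratio", type=int, default=4, choices=[2, 4],
                    help="super-resolution scale ratio, matching the *_raw_2 / *_raw_4 folders")
parser.add_argument("--save_image", action="store_true",
                    help="save the restored sRGB frames to disk")
args = parser.parse_args()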

If everything goes well, the following messages will appear in your terminal:

--- Hyper-parameter default settings ---
train settings:
 {'dataroot_GT': '/media/lxh/SSD_DATA/raw_test/gt/1080p/1080p_gt_rgb', 'dataroot_LQ': '/media/lxh/SSD_DATA/raw_test/w_d/1080p/1080p_lr_d_raw_4', 'lr': 0.0002, 'num_epochs': 100, 'N_frames': 7, 'n_workers': 12, 'batch_size': 24, 'GT_size': 256, 'LQ_size': 64, 'scale': 4, 'phase': 'train'}
val settings:
 {'dataroot_GT': '/media/lxh/SSD_DATA/raw_test/gt/1080p/1080p_gt_rgb', 'dataroot_LQ': '/media/lxh/SSD_DATA/raw_test/w_d/1080p/1080p_lr_d_raw_4', 'N_frames': 7, 'n_workers': 12, 'batch_size': 2, 'phase': 'val', 'save_image': True}
network settings:
 {'nf': 64, 'nframes': 7, 'groups': 8, 'back_RBs': 4}
dataset settings:
 {'dataset_name': 'RawVD'}
--- testing results ---
store: 29.04dB
painting: 29.02dB
train: 28.59dB
city: 29.08dB
tree: 28.06dB
avg_psnr: 28.76dB
--- end ---

RawVSR is tested on our carefully collected RawVD. The PSNR results here should match those in Table 1 of our paper.
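
The reported numbers are PSNR values for each test sequence. A minimal version of the frame-level computation is sketched below, assuming 8-bit sRGB frames; the actual evaluation code in ./utils/ may differ in details such as border cropping:

# Peak signal-to-noise ratio between a super-resolved frame and its ground truth (sketch).
import numpy as np

def psnr(sr: np.ndarray, gt: np.ndarray, max_val: float = 255.0) -> float:
    mse = np.mean((sr.astype(np.float64) - gt.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)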

2. Training

Run train.py without the --save_image flag to reduce training time.

$ python train.py --scale_ratio 4

If you want to change the default hyper-parameters (e.g., the batch_size), simply edit config.py. All network and training/testing settings are stored there.
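
As an illustration, the settings kept there mirror the defaults printed in the testing output above; the sketch below shows the kind of structure involved, not the exact contents of config.py:

# Illustrative training/network settings, mirroring the printed defaults above (sketch).
train_settings = {
    "lr": 2e-4,
    "num_epochs": 100,
    "N_frames": 7,
    "n_workers": 12,
    "batch_size": 24,   # reduce this if you run out of GPU memory
    "GT_size": 256,
    "LQ_size": 64,
    "scale": 4,
}
network_settings = {"nf": 64, "nframes": 7, "groups": 8, "back_RBs": 4}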

Acknowledgement

Some code (e.g., the DCN modules) is borrowed from EDVR with modifications.

Cite

If you use this code, please cite:

@article{liu2020exploit,
  title={Exploit Camera Raw Data for Video Super-Resolution via Hidden Markov Model Inference},
  author={Liu, Xiaohong and Shi, Kangdi and Wang, Zhe and Chen, Jun},
  journal={arXiv preprint arXiv:2008.10710},
  year={2020}
}

Contact

Should you have any questions about this code, please open a new issue. For any other questions, you may contact me by email: [email protected].
