PyTorch code for 'Efficient Single Image Super-Resolution Using Dual Path Connections with Multiple Scale Learning'

Overview

Efficient Single Image Super-Resolution Using Dual Path Connections with Multiple Scale Learning

This repository is for EMSRDPN introduced in the following paper

Bin-Cheng Yang and Gangshan Wu, "Efficient Single Image Super-Resolution Using Dual Path Connections with Multiple Scale Learning", [arxiv]

It is an extension of the following conference paper

Bin-Cheng Yang. 2019. Super Resolution Using Dual Path Connections. In Proceedings of the 27th ACM International Conference on Multimedia (MM ’19), October 21–25, 2019, Nice, France. ACM, New York, NY, USA, 9 pages. https://doi.org/10.1145/3343031.3350878

The code is built on EDSR (PyTorch) and tested in an Ubuntu 16.04 environment (Python3.7, PyTorch_1.1.0, CUDA9.0) with Titan X/Xp/V100 GPUs.

Contents

  1. Introduction
  2. Train
  3. Test
  4. Results
  5. Citation
  6. Acknowledgements

Introduction

Deep convolutional neural networks have been demonstrated to be effective for SISR in recent years. On the one hand, residual connections and dense connections have been used widely to ease forward information and backward gradient flows to boost performance. However, current methods use residual connections and dense connections separately in most network layers in a sub-optimal way. On the other hand, although various networks and methods have been designed to improve computation efficiency, save parameters, or utilize training data of multiple scale factors for each other to boost performance, they either perform super-resolution in HR space, which incurs a high computation cost, or cannot share parameters between models of different scale factors to save parameters and inference time. To tackle these challenges, we propose an efficient single image super-resolution network using dual path connections with multiple scale learning, named EMSRDPN. By introducing dual path connections inspired by Dual Path Networks into EMSRDPN, it uses residual connections and dense connections in an integrated way in most network layers. Dual path connections have the benefits of both reusing common features of residual connections and exploring new features of dense connections to learn a good representation for SISR. To utilize the feature correlation of multiple scale factors, EMSRDPN shares all network units in LR space between different scale factors to learn shared features and only uses a separate reconstruction unit for each scale factor, which can utilize training data of multiple scale factors to help each other to boost performance, and meanwhile can save parameters and support shared inference for multiple scale factors to improve efficiency. Experiments show EMSRDPN achieves better performance and comparable or even better parameter and inference efficiency over SOTA methods.
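
As a rough illustration of the dual path idea (a minimal sketch with invented module names and channel sizes, not the actual EMSRDPN code), each block maintains two feature streams: one updated by element-wise addition like a residual connection, and one grown by channel concatenation like a dense connection:

    import torch
    import torch.nn as nn

    class DualPathBlock(nn.Module):
        """Sketch only: a residual stream (updated by addition) plus a dense stream (grown by concatenation)."""
        def __init__(self, res_channels=64, dense_channels=64, growth=16):
            super().__init__()
            in_channels = res_channels + dense_channels
            out_channels = res_channels + growth
            self.body = nn.Sequential(
                nn.Conv2d(in_channels, out_channels, 3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(out_channels, out_channels, 3, padding=1),
            )
            self.res_channels = res_channels

        def forward(self, res_feat, dense_feat):
            out = self.body(torch.cat([res_feat, dense_feat], dim=1))
            res_out, dense_out = out[:, :self.res_channels], out[:, self.res_channels:]
            # residual path reuses common features, dense path accumulates newly explored ones
            return res_feat + res_out, torch.cat([dense_feat, dense_out], dim=1)

Stacking such blocks keeps the residual stream at a fixed width while the dense stream grows, which is the combination of feature reuse and new feature exploration described above.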

Train

Prepare training data

  1. Download DIV2K training data (800 training images for x2, x3, x4 and x8) from DIV2K dataset and Flickr2K training data (2650 training images) from Flickr2K dataset.

  2. Untar the downloaded files.

  3. Use src/generate_LR_x8.m to generate x8 LR data for the Flickr2K dataset; you need to modify 'folder' in src/generate_LR_x8.m to the directory where you placed the Flickr2K dataset.

  4. Set '--dir_data' in src/option.py to the directory where you placed the DIV2K and Flickr2K datasets (see the sketch below).

For more information, please refer to EDSR (PyTorch).
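
For example, in an EDSR-style src/option.py the dataset root is an argparse option; setting its default to your data directory would look roughly like this (the path below is only a placeholder, not the repository's actual default):

    import argparse

    parser = argparse.ArgumentParser(description='EMSRDPN options (sketch)')
    # point this at the directory that contains the DIV2K and Flickr2K folders
    parser.add_argument('--dir_data', type=str, default='/path/to/datasets',
                        help='root directory containing the DIV2K and Flickr2K datasets')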

Begin to train

  1. Cd to 'src' and run the following scripts to train models.

    You can use scripts in file 'demo.sh' to train models for our paper.

    To train a fresh model using the DIV2K dataset

    CUDA_VISIBLE_DEVICES=0,1 python3.7 main.py --scale 2+3+4+8 --test_scale 2+3+4+8 --save EMSRDPN_BIx2348 --model EMSRDPN --epochs 5000 --batch_size 16 --patch_size 48 --n_GPUs 2 --n_threads 16 --SRDPNconfig A --ext sep --data_test Set5 --reset --decay 1000-2000-3000-4000-5000 --lr_patch_size --data_range 1-3450 --data_train DIV2K

    To train a fresh model using the Flickr2K dataset

    CUDA_VISIBLE_DEVICES=0,1 python3.7 main.py --scale 2+3+4+8 --test_scale 2+3+4+8 --save EMSRDPN_BIx2348 --model EMSRDPN --epochs 5000 --batch_size 16 --patch_size 48 --n_GPUs 2 --n_threads 16 --SRDPNconfig A --ext sep --data_test Set5 --reset --decay 1000-2000-3000-4000-5000 --lr_patch_size --data_range 1-3450 --data_train Flickr2K

    To train a fresh model using both the DIV2K and Flickr2K datasets to reproduce the results in the paper, you need to copy all the files in DIV2K_HR/ to Flickr2K_HR/ and copy all the directories in DIV2K_LR_bicubic/ to Flickr2K_LR_bicubic/, then use the following script

    CUDA_VISIBLE_DEVICES=0,1 python3.7 main.py --scale 2+3+4+8 --test_scale 2+3+4+8 --save EMSRDPN_BIx2348 --model EMSRDPN --epochs 5000 --batch_size 16 --patch_size 48 --n_GPUs 2 --n_threads 16 --SRDPNconfig A --ext sep --data_test Set5 --reset --decay 1000-2000-3000-4000-5000 --lr_patch_size --data_range 1-3450 --data_train Flickr2K

    To continue training an unfinished model using the DIV2K dataset (the process for other datasets is similar)

    CUDA_VISIBLE_DEVICES=0,1 python3.7 main.py --scale 2+3+4+8 --test_scale 2+3+4+8 --save EMSRDPN_BIx2348 --model EMSRDPN --epochs 5000 --batch_size 16 --patch_size 48 --n_GPUs 2 --n_threads 16 --SRDPNconfig A --ext sep --data_test Set5 --resume -1 --decay 1000-2000-3000-4000-5000 --lr_patch_size --data_range 1-3450 --data_train DIV2K --load EMSRDPN_BIx2348
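
    All of the commands above train a single model for every scale factor listed in '--scale 2+3+4+8'. Conceptually (a minimal sketch with invented names, not the actual EMSRDPN modules), this works because the feature-extraction body is shared across scale factors in LR space and only the reconstruction tail is scale-specific:

    import torch.nn as nn

    class MultiScaleSR(nn.Module):
        """Sketch only: shared body in LR space, separate sub-pixel reconstruction unit per scale."""
        def __init__(self, scales=(2, 3, 4, 8), n_feats=64, n_body_layers=8):
            super().__init__()
            self.head = nn.Conv2d(3, n_feats, 3, padding=1)
            # trunk shared by all scale factors (stands in for the dual path trunk)
            self.body = nn.Sequential(*[nn.Conv2d(n_feats, n_feats, 3, padding=1)
                                        for _ in range(n_body_layers)])
            # one lightweight reconstruction tail per scale factor
            self.tails = nn.ModuleDict({
                str(s): nn.Sequential(nn.Conv2d(n_feats, 3 * s * s, 3, padding=1), nn.PixelShuffle(s))
                for s in scales
            })

        def forward(self, x, scale):
            feat = self.body(self.head(x))
            return self.tails[str(scale)](feat)

    Because only the tails differ between scale factors, the training data of all scales updates the same shared weights, and a single checkpoint can serve x2, x3, x4 and x8 at test time.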

Test

Quick start

  1. Download the benchmark datasets from BaiduYun (access code: 20v5), place them in the directory specified by '--dir_data' in src/option.py, and untar them.

  2. Download the EMSRDPN model for our paper from BaiduYun (access code: d2ov) and place it in 'experiment/'. Other multiple scale models can be downloaded from BaiduYun (access code: z5ey).

  3. Cd to 'src' and run the following scripts to test the downloaded EMSRDPN model.

    You can use scripts in file 'demo.sh' to produce results for our paper.

    To test a trained model

    CUDA_VISIBLE_DEVICES=0 python3.7 main.py --scale 2+3+4+8 --test_scale 2+3+4+8 --save EMSRDPN_BIx2348_test --model EMSRDPN --epochs 5000 --batch_size 16 --patch_size 48 --n_GPUs 1 --n_threads 16 --SRDPNconfig A --ext sep --data_test Set5+Set14+B100+Urban100+Manga109 --reset --decay 1000-2000-3000-4000-5000 --lr_patch_size --data_range 1-3450 --data_train DIV2K --pre_train ../experiment/EMSRDPN_BIx2348.pt --test_only --save_results

    To test a trained model using self-ensemble

    CUDA_VISIBLE_DEVICES=0 python3.7 main.py --scale 2+3+4+8 --test_scale 2+3+4+8 --save EMSRDPN_BIx2348_test+ --model EMSRDPN --epochs 5000 --batch_size 16 --patch_size 48 --n_GPUs 1 --n_threads 16 --SRDPNconfig A --ext sep --data_test Set5+Set14+B100+Urban100+Manga109 --reset --decay 1000-2000-3000-4000-5000 --lr_patch_size --data_range 1-3450 --data_train DIV2K --pre_train ../experiment/EMSRDPN_BIx2348.pt --test_only --save_results --self_ensemble

    To test a trained model using multiple scale inference

    CUDA_VISIBLE_DEVICES=0 python3.7 main.py --scale 2+3+4+8 --test_scale 2+3+4+8 --save EMSRDPN_BIx2348_test_multi_scale_infer --model EMSRDPN --epochs 5000 --batch_size 16 --patch_size 48 --n_GPUs 1 --n_threads 16 --SRDPNconfig A --ext sep --data_test Set5 --reset --decay 1000-2000-3000-4000-5000 --lr_patch_size --data_range 1-3450 --data_train DIV2K --pre_train ../experiment/EMSRDPN_BIx2348.pt --test_only --save_results --multi_scale_infer
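
    The '--self_ensemble' flag corresponds to the geometric self-ensemble used in EDSR-style code: the LR input is expanded into eight flipped/transposed variants, each variant is super-resolved, the outputs are mapped back to the original orientation, and the eight results are averaged. A minimal sketch of that behaviour (assumed from the EDSR code base this repository builds on, not necessarily the exact implementation here):

    import torch

    def geometric_self_ensemble(model, lr):
        """Sketch of x8 self-ensemble: average over horizontal flip, vertical flip and transpose variants."""
        def transform(x, op):
            if op == 'v':
                return torch.flip(x, dims=[-1])   # horizontal flip (flip width)
            if op == 'h':
                return torch.flip(x, dims=[-2])   # vertical flip (flip height)
            return x.transpose(-2, -1)            # 't': swap H and W

        inputs = [lr]
        for op in ('v', 'h', 't'):
            inputs.extend([transform(x, op) for x in inputs])

        outputs = [model(x) for x in inputs]      # the real model call may also take the scale factor
        for i in range(len(outputs)):
            # undo the transforms in reverse order of application
            if i > 3:
                outputs[i] = transform(outputs[i], 't')
            if i % 4 > 1:
                outputs[i] = transform(outputs[i], 'h')
            if i % 2 == 1:
                outputs[i] = transform(outputs[i], 'v')
        return torch.stack(outputs).mean(dim=0)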

Results

All the test results can be downloaded from BaiduYun (access code: oawz).

Citation

If you find the code helpful in your research or work, please cite the following papers.

@InProceedings{Lim_2017_CVPR_Workshops,
  author = {Lim, Bee and Son, Sanghyun and Kim, Heewon and Nah, Seungjun and Lee, Kyoung Mu},
  title = {Enhanced Deep Residual Networks for Single Image Super-Resolution},
  booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
  month = {July},
  year = {2017}
}

@inproceedings{2019Super,
  title = {Super Resolution Using Dual Path Connections},
  author = {Yang, Bin-Cheng},
  booktitle = {Proceedings of the 27th ACM International Conference on Multimedia (MM '19)},
  year = {2019}
}

@misc{yang2021efficient,
      title={Efficient Single Image Super-Resolution Using Dual Path Connections with Multiple Scale Learning}, 
      author={Bin-Cheng Yang and Gangshan Wu},
      year={2021},
      eprint={2112.15386},
      archivePrefix={arXiv},
      primaryClass={eess.IV}
}

Acknowledgements

This code is built on EDSR (PyTorch). We thank the authors for sharing their code.
