PyTorch code for our paper "Gated Multiple Feedback Network for Image Super-Resolution" (BMVC2019)

Overview

Gated Multiple Feedback Network for Image Super-Resolution

This repository contains the PyTorch implementation for the proposed GMFN [arXiv].

The framework of our proposed GMFN. The colored arrows among different time steps denote the multiple feedback connections. The high-level information carried by them helps low-level features become more representative.
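
To make the feedback idea concrete, here is a minimal, hypothetical PyTorch sketch of how a gate could fuse high-level features fed back from earlier time steps with the current low-level features. The module and variable names (FeedbackGate, low_feat, feedback_feats) are illustrative only and are not taken from the released gmfn_arch.py.

import torch
import torch.nn as nn

class FeedbackGate(nn.Module):
    # Simplified sketch of gated feedback fusion (not the released GMFN code):
    # high-level feedback features are compressed and reweighted before they
    # refine the current low-level features.
    def __init__(self, num_feats, num_feedbacks):
        super().__init__()
        self.compress = nn.Conv2d(num_feats * num_feedbacks, num_feats, kernel_size=1)
        self.gate = nn.Sequential(
            nn.Conv2d(num_feats * (num_feedbacks + 1), num_feats, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, low_feat, feedback_feats):
        fb = torch.cat(feedback_feats, dim=1)                  # stack feedback along channels
        weights = self.gate(torch.cat([low_feat, fb], dim=1))  # per-channel gating weights
        return low_feat + weights * self.compress(fb)          # gated residual refinement

# Usage sketch: two feedback connections, 64-channel features, 48x48 LR patches.
gate = FeedbackGate(num_feats=64, num_feedbacks=2)
low = torch.randn(1, 64, 48, 48)
fbs = [torch.randn(1, 64, 48, 48) for _ in range(2)]
out = gate(low, fbs)  # -> torch.Size([1, 64, 48, 48])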

Demo

Clone the SRFBN repository as the backbone and satisfy its requirements.

Test

  1. Copy ./networks/gmfn_arch.py into SRFBN_CVPR19/networks/

  2. Download the pre-trained models from Google Drive or Baidu Netdisk, unzip them, and place them into SRFBN_CVPR19/models.

  3. Copy ./options/test/ to SRFBN_CVPR19/options/test/.

  4. Run cd SRFBN_CVPR19 and then one of the following commands to evaluate on Set5:

python test.py -opt options/test/test_GMFN_x2.json
python test.py -opt options/test/test_GMFN_x3.json
python test.py -opt options/test/test_GMFN_x4.json
  5. Finally, the PSNR/SSIM values for Set5 are printed on your screen, and the reconstructed images can be found in ./results (a sketch for double-checking these values is given below).
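
If you want to double-check the reported numbers, the short sketch below computes PSNR between a reconstructed image in ./results and its ground truth; evaluating on the Y channel and cropping a border equal to the scale factor is the usual SR convention. The file names used here are placeholders, not the exact names written by the test script.

import numpy as np
from PIL import Image

def rgb_to_y(img):
    # Luma (Y) channel of an RGB image in [0, 255], ITU-R BT.601 weights.
    img = img.astype(np.float64)
    return 16.0 + (65.481 * img[..., 0] + 128.553 * img[..., 1] + 24.966 * img[..., 2]) / 255.0

def psnr_y(sr_path, hr_path, scale):
    sr = rgb_to_y(np.array(Image.open(sr_path).convert("RGB")))
    hr = rgb_to_y(np.array(Image.open(hr_path).convert("RGB")))
    # Crop a border of `scale` pixels on each side, as is common in SR evaluation.
    sr = sr[scale:-scale, scale:-scale]
    hr = hr[scale:-scale, scale:-scale]
    mse = np.mean((sr - hr) ** 2)
    return 10.0 * np.log10(255.0 ** 2 / mse)

# Placeholder paths -- point them at an output of test.py and the matching HR image.
print(psnr_y("results/baby_GMFN_x2.png", "Set5/baby.png", scale=2))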

To test GMFN on other standard SR benchmarks or on your own images, please refer to the instructions in SRFBN.

Train

  1. Prepare the training set according to this (1-3).
  2. Modify ./options/train/train_GMFN.json by following the instructions in ./options/train/README.md.
  3. Run commands:
cd SRFBN_CVPR19
python train.py -opt options/train/train_GMFN.json
  4. You can monitor the training process in ./experiments.

  5. Finally, follow the test pipeline above to evaluate the model you trained yourself (a quick checkpoint-inspection sketch is given below).
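
As a quick sanity check before running the test configs, you can load a checkpoint saved during training and list its parameter shapes. The path below, and the assumption that the file is either a bare state dict or a dict wrapping one under a "state_dict" key, are guesses about the SRFBN/GMFN saving format, so adjust as needed.

import torch

# Illustrative path -- point it at a checkpoint written under ./experiments.
ckpt = torch.load("experiments/GMFN_x4/best_ckp.pth", map_location="cpu")

# Assumed layout: either a bare state dict, or a dict that wraps one.
state = ckpt.get("state_dict", ckpt) if isinstance(ckpt, dict) else ckpt
for name, value in state.items():
    if torch.is_tensor(value):
        print(name, tuple(value.shape))  # parameter name and shape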

Performance

Quantitative Results

Quantitative evaluation under scale factors x2, x3 and x4. The best performance is shown in bold and the second best performance is underlined.

More Qualitative Results (x4)

Acknowledgment

If you find our work useful in your research or publications, please consider citing:

@inproceedings{li2019gmfn,
    author = {Li, Qilei and Li, Zhen and Lu, Lu and Jeon, Gwanggil and Liu, Kai and Yang, Xiaomin},
    title = {Gated Multiple Feedback Network for Image Super-Resolution},
    booktitle = {The British Machine Vision Conference (BMVC)},
    year = {2019}
}

@inproceedings{li2019srfbn,
    author = {Li, Zhen and Yang, Jinglei and Liu, Zheng and Yang, Xiaomin and Jeon, Gwanggil and Wu, Wei},
    title = {Feedback Network for Image Super-Resolution},
    booktitle = {The IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
    year = {2019}
}
Comments
  • Approximately how many epochs are needed to reach the results in the paper (x4 SR result)?

    Hi, liqilei. After running about 700 epochs, the best result on the validation set is 32.41. I want to know whether my training process is problematic. How long did it take you to reach 32.47 with SRFBN when you were training? And how long does it take to reach 32.70? Thank you.

    opened by Senwang98 7
  • Train error: sizes do not match

    CUDA_VISIBLE_DEVICES=0 python train.py -opt options/train/train_GMFN.json (training with the CelebA dataset)

    ===> Training Epoch: [1/1000]... Learning Rate: 0.000200
    Epoch: [1/1000]:   0%| | 0/251718 [00:00<?, ?it/s]
    Traceback (most recent call last):
      File "train.py", line 131, in <module>
        main()
      File "train.py", line 69, in main
        iter_loss = solver.train_step()
      File "/exp_sr/SRFBN/solvers/SRSolver.py", line 104, in train_step
        loss_steps = [self.criterion_pix(sr, split_HR) for sr in outputs]
      File "/exp_sr/SRFBN/solvers/SRSolver.py", line 104, in <listcomp>
        loss_steps = [self.criterion_pix(sr, split_HR) for sr in outputs]
      File "/toolscnn/env_pyt0.4_py3.5_awsrn/lib/python3.5/site-packages/torch/nn/modules/module.py", line 477, in __call__
        result = self.forward(*input, **kwargs)
      File "/toolscnn/env_pyt0.4_py3.5_awsrn/lib/python3.5/site-packages/torch/nn/modules/loss.py", line 87, in forward
        return F.l1_loss(input, target, reduction=self.reduction)
      File "/toolscnn/env_pyt0.4_py3.5_awsrn/lib/python3.5/site-packages/torch/nn/functional.py", line 1702, in l1_loss
        input, target, reduction)
      File "/toolscnn/env_pyt0.4_py3.5_awsrn/lib/python3.5/site-packages/torch/nn/functional.py", line 1674, in _pointwise_loss
        return lambd_optimized(input, target, reduction)
    RuntimeError: input and target shapes do not match: input [16 x 3 x 192 x 192], target [16 x 3 x 48 x 48] at /pytorch/aten/src/THCUNN/generic/AbsCriterion.cu:12

    opened by yja1 3
  • Not an Issue

    Hey @Paper99,

    Thanks for sharing your code! I wonder if you could help with visualizing feature maps as you did in Figure 4 of your paper.

    Thanks

    opened by Auth0rM0rgan 1
  • My training result with scale = 2

    Hi, after training on DIV2K, I get the following final results (tested with best_ckp.pth):

    Set5: 38.16/0.9610
    Set14: 33.91/0.9203
    Urban100: 32.81/0.9349
    B100: 32.30/0.9011
    Manga109: 39.01/0.9776
    

    These seem much lower than the numbers reported in your paper.

    opened by Senwang98 6
Owner
Qilei Li