
SRNTT: Image Super-Resolution by Neural Texture Transfer

TensorFlow implementation of the paper Image Super-Resolution by Neural Texture Transfer, accepted to CVPR 2019. This is a simplified version in which the reference images are used without augmentation (e.g., rotation and scaling).

Project Page

PyTorch Implementation

Pre-requisites

  • Python 3.6
  • TensorFlow 1.13.1
  • requests 2.21.0
  • pillow 5.4.1
  • matplotlib 3.0.2

Tested on macOS (Mojave).

Dataset

This repo only provides a small training set of ten input-reference pairs for demo purposes. The input images and reference images are stored in data/train/CUFED/input and data/train/CUFED/ref, respectively. Corresponding input and reference images share the same file name (a quick pairing check is sketched after the download commands below). To speed up the training process, patch matching and swapping are performed offline, and the swapped feature maps are saved to data/train/CUFED/map_321 (see offline_patchMatch_textureSwap.py for more details). If you want to train your own model, please prepare your own training set or download either of the following demo training sets:

11,485 input-reference pairs (size 320x320) extracted from DIV2K.

Each pair is extracted from the same image without overlap, with scaling and rotation taken into account.

$ python download_dataset.py --dataset_name DIV2K
11,871 input-reference pairs (size 160x160) extracted from CUFED.

Each pair is extracted from similar images, covering five degrees of similarity.

$ python download_dataset.py --dataset_name CUFED
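
Whichever training set you use, each input image must have a same-named counterpart in the reference folder. The following minimal sketch (not part of this repo) checks that pairing for the bundled CUFED samples; adjust the two paths for your own data.

# Sanity-check that every input image has a same-named reference image.
# Not part of the repo; paths follow the data layout described above.
import os

input_dir = 'data/train/CUFED/input'
ref_dir = 'data/train/CUFED/ref'

input_files = set(os.listdir(input_dir))
ref_files = set(os.listdir(ref_dir))

missing_refs = sorted(input_files - ref_files)
missing_inputs = sorted(ref_files - input_files)

if missing_refs or missing_inputs:
    print('Inputs without a matching reference:', missing_refs)
    print('References without a matching input:', missing_inputs)
else:
    print('All %d input-reference pairs match by file name.' % len(input_files))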

This repo includes one group of samples from the CUFED5 dataset, where each input image corresponds to five reference images (different from the paper) with different degrees of similarity to the input image. Please download the full dataset by running

$ python download_dataset.py --dataset_name CUFED5

Easy Testing

$ sh test.sh

The results will be saved to the folder demo_testing_srntt, including the following 6 images:

  • [1/6] HR.png, the original image.
  • [2/6] LR.png, the low-resolution (LR) image, downscaling factor 4x.
  • [3/6] Bicubic.png, the image upscaled by bicubic interpolation, upscaling factor 4x.
  • [4/6] Ref_XX.png, the reference images, indexed by XX.
  • [5/6] Upscale.png, the image upscaled by a pre-trained SR network, upscaling factor 4x.
  • [6/6] SRNTT.png, the SR result produced by SRNTT, upscaling factor 4x.
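
To inspect all six outputs in one figure, a small sketch like the one below can be used (not included in this repo); it loads the images with pillow and lays them out with matplotlib, assuming the default demo_testing_srntt result folder produced by test.sh.

# View the demo outputs side by side.
# Not part of the repo; assumes the default demo_testing_srntt result folder
# and picks up whatever Ref_XX.png indices are present.
import glob
import os

import matplotlib.pyplot as plt
from PIL import Image

result_dir = 'demo_testing_srntt'
names = ['HR.png', 'LR.png', 'Bicubic.png', 'Upscale.png', 'SRNTT.png']
names += [os.path.basename(p) for p in sorted(glob.glob(os.path.join(result_dir, 'Ref_*.png')))]

fig, axes = plt.subplots(2, 3, figsize=(12, 8))
for ax in axes.flat:
    ax.axis('off')
for ax, name in zip(axes.flat, names):
    path = os.path.join(result_dir, name)
    if os.path.exists(path):
        ax.imshow(Image.open(path))
        ax.set_title(name)
plt.tight_layout()
plt.show()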

Custom Testing

$ python main.py
    --is_train              False
    --input_dir             path/to/input/image/file
    --ref_dir               path/to/ref/image/file
    --result_dir            path/to/result/folder
    --ref_scale             default 1, expected_ref_scale divided by original_ref_scale
    --is_original_image     default True, whether the input is the original image
    --use_init_model_only   default False, whether to use the init model, trained with reconstruction loss only
    --use_weight_map        default False, whether to use the weighted model, trained with the weight map
    --save_dir              path/to/a/specified/model if it exists, otherwise ignore this parameter

Please note that this repo provides two types of pre-trained SRNTT models in SRNTT/models/SRNTT:

  • srntt.npz is trained by all losses, i.e., reconstruction loss, perceptual loss, texture loss, and adversarial loss.
  • srntt_init.npz is trained by only the reconstruction loss, corresponding to SRNTT-l2 in the paper.

To switch between the two demo models, set --use_init_model_only to decide whether srntt_init.npz is used.
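
For example, a rough sketch (not part of the repo) that runs both demo models on the same input by toggling --use_init_model_only could look like this; the input and reference paths are placeholders, and the flags are the ones documented under Custom Testing.

# Run both pre-trained models on the same input/reference pair for comparison.
# Not part of the repo; the input/ref paths below are placeholders.
import subprocess

for use_init_only, tag in [('False', 'full'), ('True', 'init')]:
    subprocess.check_call([
        'python', 'main.py',
        '--is_train', 'False',
        '--input_dir', 'path/to/input/image/file',
        '--ref_dir', 'path/to/ref/image/file',
        '--result_dir', 'demo_compare_' + tag,
        '--use_init_model_only', use_init_only,
    ])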

Easy Training

$ sh train.sh

The CUFED training set will be downloaded automatically. To speed up the training process, patch matching and swapping are conducted offline to generate the swapped feature maps. The models will be saved to demo_training_srntt/model, and intermediate samples will be saved to demo_training_srntt/sample. Parameter settings are saved to demo_training_srntt/arguments.txt.
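
A quick way to keep an eye on training progress is to open the most recently written intermediate sample. The sketch below (not part of the repo) assumes the default demo_training_srntt/sample folder and that samples are saved as .png files, which may differ from the actual naming.

# Peek at the most recent intermediate training sample.
# Not part of the repo; the sample folder is the default from train.sh, and
# the .png extension is an assumption about how samples are named.
import glob
import os

from PIL import Image

sample_dir = 'demo_training_srntt/sample'
samples = sorted(glob.glob(os.path.join(sample_dir, '*.png')), key=os.path.getmtime)
if samples:
    print('Latest intermediate sample:', samples[-1])
    Image.open(samples[-1]).show()
else:
    print('No .png samples found yet in', sample_dir)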

Custom Training

Please first prepare the input and reference images, which should be square patches of the same size. In addition, the input and reference images should be stored in separate folders, and corresponding input and reference images should share the same file name (a rough layout template is sketched below). Please refer to the data/train/CUFED folder for examples. Then, use offline_patchMatch_textureSwap.py to generate the feature maps ahead of time.
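
As a rough template for that layout (not the repo's own preprocessing), the sketch below cuts two non-overlapping square patches from each HR image and writes them under the same file name into separate input/ and ref/ folders. The real demo sets pair patches in a more elaborate way (scaling, rotation, similar images), so treat this only as a way to get the folder structure and file naming right; the source folder path is a placeholder.

# Build a minimal custom training layout: two non-overlapping square patches
# per HR image, saved with the same file name in input/ and ref/ folders.
# Not part of the repo; src_dir is a placeholder, and 160 is the patch size
# used by the CUFED demo set (use 320 to mimic the DIV2K set).
import os

from PIL import Image

src_dir = 'path/to/hr/images'
out_input = 'my_train/input'
out_ref = 'my_train/ref'
patch = 160

os.makedirs(out_input, exist_ok=True)
os.makedirs(out_ref, exist_ok=True)

for name in sorted(os.listdir(src_dir)):
    img = Image.open(os.path.join(src_dir, name)).convert('RGB')
    w, h = img.size
    if w < 2 * patch or h < patch:
        continue  # too small for two side-by-side patches
    left = img.crop((0, 0, patch, patch))
    right = img.crop((w - patch, 0, w, patch))
    base = os.path.splitext(name)[0] + '.png'
    left.save(os.path.join(out_input, base))
    right.save(os.path.join(out_ref, base))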

$ python main.py
    --is_train True
    --save_dir folder/to/save/models
    --input_dir path/to/input/image/folder
    --ref_dir path/to/ref/image/folder
    --map_dir path/to/feature_map/folder
    --batch_size default 9
    --num_epochs default 100
    --input_size default 40, the size of the LR patch, i.e., 1/4 of the HR image; set to 80 for the DIV2K dataset
    --use_weight_map default False, whether to use the weight map, which reduces the negative effect
                     of the reference image but may also decrease sharpness.

Please refer to main.py for more parameter settings for training.

Testing with a Custom-Trained Model

$ python main.py 
    --is_train              False 
    --input_dir             path/to/input/image/file
    --ref_dir               path/to/ref/image/file
    --result_dir            path/to/result/folder
    --ref_scale             default 1, expected_ref_scale divided by original_ref_scale
    --is_original_image     default True, whether the input is the original image
    --save_dir              the same as save_dir in training

Acknowledgement

Thanks to TensorLayer for facilitating the implementation of this demo code. We have included TensorLayer 1.5.0 in SRNTT/tensorlayer.

Contact

Zhifei Zhang
