Overview

Bridging the Visual Gap: Wide-Range Image Blending

PyTorch implementation of our CVPR 2021 paper "Bridging the Visual Gap: Wide-Range Image Blending".
You can visit our project website here.

In this paper, we propose a novel model to tackle the problem of wide-range image blending, which aims to smoothly merge two different images into a panorama by generating novel image content for the intermediate region between them.

Paper

Bridging the Visual Gap: Wide-Range Image Blending
Chia-Ni Lu, Ya-Chu Chang, Wei-Chen Chiu
IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2021.

Please cite our paper if you find it useful for your research.

@InProceedings{lu2021bridging,
    author = {Lu, Chia-Ni and Chang, Ya-Chu and Chiu, Wei-Chen},
    title = {Bridging the Visual Gap: Wide-Range Image Blending},
    booktitle = {IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month = {June},
    year = {2021}
}

Installation

  • This code was developed with Python 3.7.4, PyTorch 1.0.0, and CUDA 9.2
  • Other requirements: numpy, skimage, tensorboardX (an install sketch is given below)
  • Clone this repo
git clone https://github.com/julia0607/Wide-Range-Image-Blending.git
cd Wide-Range-Image-Blending
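
If you use pip, the other dependencies can be installed roughly as follows; the package names are assumptions (skimage is distributed on PyPI as scikit-image), and PyTorch should be installed separately to match your CUDA version:

# minimal sketch; pick the PyTorch build that matches your CUDA setup first
pip install numpy scikit-image tensorboardX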

Testing

Download our pre-trained model weights from here and put them under weights/.

Test the sample data provided in this repo:

python test.py

Or download our paired test data from here and put them under data/.
Then run the testing code:

python test.py --test_data_dir_1 ./data/scenery6000_paired/test/input1/ \
               --test_data_dir_2 ./data/scenery6000_paired/test/input2/

Run your own data:

python test.py --test_data_dir_1 YOUR_DATA_PATH_1 \
               --test_data_dir_2 YOUR_DATA_PATH_2 \
               --save_dir YOUR_SAVE_PATH

If your test data isn't paired already, add --rand_pair True to randomly pair the data.
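
For instance, blending two unpaired folders of your own images and writing the results to a custom location could look like the following (all paths are placeholders):

python test.py --test_data_dir_1 YOUR_DATA_PATH_1 \
               --test_data_dir_2 YOUR_DATA_PATH_2 \
               --save_dir YOUR_SAVE_PATH \
               --rand_pair True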

Training

We adopt the scenery dataset proposed by Very Long Natural Scenery Image Prediction by Outpainting for our experiments, splitting it into 5,040 training images and 1,000 testing images.

Download the dataset with our split of train and test set from here and put them under data/.
You can unzip the .zip file with jar xvf scenery6000_split.zip.
Then run the training code for the self-reconstruction stage (first stage):

python train_SR.py

After the self-reconstruction stage finishes, move the latest model weights from checkpoints/SR_Stage/ to weights/, then run the training code for the fine-tuning stage (second stage):

python train_FT.py --load_pretrain True
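
Putting the two stages together, the whole training workflow on the scenery dataset is roughly the sketch below; the checkpoint filenames under checkpoints/SR_Stage/ are an assumption here (a .pth extension is assumed), so adjust the move command to the files your run actually produces:

# stage 1: self-reconstruction
python train_SR.py
# move the latest stage-1 weights into weights/ (.pth filenames assumed)
mv checkpoints/SR_Stage/*.pth weights/
# stage 2: fine-tuning, initialized from the stage-1 weights
python train_FT.py --load_pretrain True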

Train the model with your own dataset:

python train_SR.py --train_data_dir YOUR_DATA_PATH

After the self-reconstruction stage finishes, move the latest model weights to weights/, then run the training code for the fine-tuning stage (second stage):

python train_FT.py --load_pretrain True \
                   --train_data_dir YOUR_DATA_PATH

If your training data isn't already paired, add --rand_pair True to randomly pair the data in the fine-tuning stage.
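
As an example, fine-tuning on an unpaired custom dataset combines these flags (the path is a placeholder):

python train_FT.py --load_pretrain True \
                   --train_data_dir YOUR_DATA_PATH \
                   --rand_pair True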

TensorBoard Visualization

Visualization on TensorBoard for training and validation is supported. Run tensorboard --logdir YOUR_LOG_DIR to view training progress.
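
For example (the log directory below is only a placeholder; point --logdir at wherever your training run writes its event files):

tensorboard --logdir ./logs/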

Acknowledgments

Our code is partially based on Very Long Natural Scenery Image Prediction by Outpainting and a PyTorch re-implementation of Generative Image Inpainting with Contextual Attention.
The implementation of the ID-MRF loss is borrowed from Image Inpainting via Generative Multi-column Convolutional Neural Networks.
