TSIT: A Simple and Versatile Framework for Image-to-Image Translation

Overview

This repository provides the official PyTorch implementation for the following paper:

TSIT: A Simple and Versatile Framework for Image-to-Image Translation
Liming Jiang, Changxu Zhang, Mingyang Huang, Chunxiao Liu, Jianping Shi and Chen Change Loy
In ECCV 2020 (Spotlight).
Paper

Abstract: We introduce a simple and versatile framework for image-to-image translation. We unearth the importance of normalization layers, and provide a carefully designed two-stream generative model with newly proposed feature transformations in a coarse-to-fine fashion. This allows multi-scale semantic structure information and style representation to be effectively captured and fused by the network, permitting our method to scale to various tasks in both unsupervised and supervised settings. No additional constraints (e.g., cycle consistency) are needed, contributing to a very clean and simple method. Multi-modal image synthesis with arbitrary style control is made possible. A systematic study compares the proposed method with several state-of-the-art task-specific baselines, verifying its effectiveness in both perceptual quality and quantitative evaluations.

Updates

  • [01/2021] The code of TSIT is released.

  • [07/2020] The paper of TSIT is accepted by ECCV 2020 (Spotlight).

Installation

After installing Anaconda, we recommend creating a new conda environment with Python 3.7.6:

conda create -n tsit python=3.7.6 -y
conda activate tsit

Clone this repo, install PyTorch 1.1.0 (newer versions may also work) and other dependencies:

git clone https://github.com/EndlessSora/TSIT.git
cd TSIT
pip install -r requirements.txt

This code also requires Synchronized-BatchNorm-PyTorch:

cd models/networks/
git clone https://github.com/vacancy/Synchronized-BatchNorm-PyTorch
cp -rf Synchronized-BatchNorm-PyTorch/sync_batchnorm .
rm -rf Synchronized-BatchNorm-PyTorch
cd ../../

Tasks and Datasets

The code covers 3 image-to-image translation tasks on 5 datasets. For more details, please refer to our paper.

Task Abbreviations

  • Arbitrary Style Transfer (AST) on Yosemite summer → winter, BDD100K day → night, and Photo → art datasets.
  • Semantic Image Synthesis (SIS) on Cityscapes and ADE20K datasets.
  • Multi-Modal Image Synthesis (MMIS) on BDD100K sunny → different time/weather conditions dataset.

The abbreviations are used to specify the --task argument when training and testing.

Dataset Preparation

We provide one-click scripts to prepare datasets. The details are provided below.

  • Yosemite summer → winter and Photo → art. The provided scripts handle everything, including the download. For example, simply run:
bash datasets/prepare_summer2winteryosemite.sh
  • BDD100K. Please first download BDD100K Images from the official website. We have provided lists that classify the images by weather and time of day. After downloading, you only need to run:
bash datasets/prepare_bdd100k.sh [data_root]

The [data_root] should be specified as the path to the BDD100K root folder that contains the images folder. The script will put the lists in the proper place and symlink the root folder to ./datasets.

  • Cityscapes. Please follow the standard download and preparation guidelines on the official website. We recommend symlinking its root folder [data_root] to ./datasets by running:
bash datasets/prepare_cityscapes.sh [data_root]
  • ADE20K. The dataset can be downloaded here, which is from the MIT Scene Parsing Benchmark. After unzipping the dataset, put the jpg image files ADEChallengeData2016/images/ and the png label files ADEChallengeData2016/annotations/ in the same directory (see the layout sketch after this list). We also recommend symlinking its root folder [data_root] to ./datasets by running:
bash datasets/prepare_ade20k.sh [data_root]
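
For reference, a rough sketch of the expected ADE20K layout after unzipping. This follows the standard MIT Scene Parsing structure and is shown only for orientation; check the archive you actually downloaded:

ADEChallengeData2016/
├── images/          (jpg images, with training/ and validation/ subfolders)
└── annotations/     (png label maps, with training/ and validation/ subfolders)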

Testing Pretrained Models

  1. Download the pretrained models and unzip them to ./checkpoints.

  2. For a quick start, we have provided all the example test scripts. After preparing the corresponding datasets, you can directly use the test scripts. For example:

bash test_scripts/ast_summer2winteryosemite.sh
  3. The generated images will be saved at ./results/[experiment_name] by default.

  4. You can use --results_dir to specify the output directory. --how_many specifies the maximum number of images to generate. By default, the code loads the latest checkpoint, which can be changed using --which_epoch. You can also drop --show_input to show only the generated images without the input references. A sketch of a full test command is given after this list.

  5. For MMIS sunny → different time/weather conditions, --test_mode can be specified (optional): night | cloudy | rainy | snowy | all (default).
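
As a rough sketch, the options above can be combined into a single test command like the one below. The entry point name test.py and the concrete values (experiment name, number of images, test mode) are assumptions for illustration only; the scripts in test_scripts/ contain the exact commands.

python test.py --name mmis_sunny2diffweathers --task MMIS \
    --checkpoints_dir ./checkpoints --results_dir ./results \
    --which_epoch latest --how_many 200 --show_input \
    --test_mode rainy

Dataset-related options (--dataset_mode, --croot, --sroot) are typically set the same way as for training, described in the next section.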

Training

For a quick start, we have provided all the example training scripts. After preparing the corresponding datasets, you can directly use the training scripts. For example:

bash train_scripts/ast_summer2winteryosemite.sh

Please note that you may want to change the experiment name --name or the checkpoint saving root --checkpoints_dir to prevent your newly trained models from overwriting the pretrained ones (if used).

--task is specified using the abbreviations above. --dataset_mode specifies the dataset type. --croot and --sroot specify the content and style data roots, respectively. The results may be better reproduced on NVIDIA Tesla V100 GPUs.
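
Putting these options together, a training command could look roughly like the sketch below. The entry point name train.py and the concrete values (experiment name, dataset mode, data roots) are placeholders for illustration; the scripts in train_scripts/ contain the exact commands.

python train.py --name sis_cityscapes --task SIS \
    --dataset_mode cityscapes \
    --croot ./datasets/cityscapes --sroot ./datasets/cityscapes \
    --checkpoints_dir ./checkpoints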

After training, testing the newly trained models is similar to testing pretrained models.

Citation

If you find this work useful for your research, please cite our paper:

@inproceedings{jiang2020tsit,
  title={{TSIT}: A Simple and Versatile Framework for Image-to-Image Translation},
  author={Jiang, Liming and Zhang, Changxu and Huang, Mingyang and Liu, Chunxiao and Shi, Jianping and Loy, Chen Change},
  booktitle={ECCV},
  year={2020}
}

Acknowledgments

The code is greatly inspired by SPADE, pytorch-AdaIN, and Synchronized-BatchNorm-PyTorch.

License

Copyright (c) 2020. All rights reserved.

The code is licensed under the CC BY-NC-SA 4.0 (Attribution-NonCommercial-ShareAlike 4.0 International).
