Implementation for "Seamless Manga Inpainting with Semantics Awareness" (SIGGRAPH 2021 issue)

Overview

Seamless Manga Inpainting with Semantics Awareness

[SIGGRAPH 2021](To appear) | Project Website | BibTeX

Introduction:

Manga inpainting fills in the disoccluded pixels left by the removal of dialogue balloons or "sound effect" text. This process has long been needed by the industry for language localization and for conversion to animated manga. It is mostly done manually, as existing methods (mostly designed for natural image inpainting) cannot produce satisfying results. We present the first manga inpainting method, a deep learning model, that generates high-quality results. Instead of inpainting directly, we separate the complicated inpainting task into two major phases, semantic inpainting and appearance synthesis. This separation eases the feature understanding and hence the training of the learning model. A key idea is to disentangle the structural lines and the screentones, which helps the network better distinguish structural-line features from screentone features for semantic interpretation. A detailed description of the system can be found in our [paper](To appear).

Example Results

Below is an example of our inpainted manga image. Our method automatically fills the disoccluded regions with meaningful structural lines and seamless screentones.
Example

Prerequisites

  • Python 3.6
  • PyTorch 1.2
  • NVIDIA GPU + CUDA cuDNN
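
You can quickly verify that your environment matches these prerequisites with a minimal sketch like the one below (not part of the official code):

```python
# Minimal environment check (a sketch, not part of the official code).
import sys
import torch

print("Python:", sys.version.split()[0])      # expecting 3.6.x
print("PyTorch:", torch.__version__)          # expecting 1.2.x
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))
    print("cuDNN enabled:", torch.backends.cudnn.enabled)
```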

Installation

  • Clone this repo:
git clone https://github.com/msxie92/MangaInpainting.git
cd MangaInpainting
pip install -r requirements.txt

Datasets

1) Images

As most of our training manga images are under copyright, we recommend using the restored Manga109 dataset. Please download the dataset from the official website and then use Manga Restoration to restore its bitonal nature. Use a larger resolution than the predicted one to tolerate the prediction error; empirically, set scale > 1.4.
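
One reading of the scale recommendation above is to take the predicted scale but never go below 1.4. The helper below is an illustrative sketch of that rule, not part of the Manga Restoration code:

```python
# Sketch: choose a restoration target resolution from a predicted scale.
# The helper and its 1.4 floor follow the note above; it is illustrative only.
def target_resolution(width, height, predicted_scale, min_scale=1.4):
    """Return a resolution at least min_scale times the original size."""
    scale = max(predicted_scale, min_scale)
    return round(width * scale), round(height * scale)

print(target_resolution(827, 1170, predicted_scale=1.25))  # -> (1158, 1638)
```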

2) Structural lines

Our model is trained on structural lines extracted by Li et al. You can download their publicly available testing code.

3) Masks

Our model is trained on both regular masks (randomly generated rectangle masks) and irregular masks (provided by Liu et al.). You can download the publicly available Irregular Mask Dataset from their website. Alternatively, you can download the Quick Draw Irregular Mask Dataset by Karim Iskakov, which is a combination of 50 million strokes drawn by human hand.
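
For reference, here is a minimal sketch of the "regular mask" idea (a randomly placed rectangle). The image size and the rectangle extents are illustrative assumptions, not the exact training parameters:

```python
# Sketch: generate one random rectangular (regular) mask.
# Image size and rectangle extents are illustrative assumptions.
import numpy as np

def random_rect_mask(height=256, width=256, rng=None):
    """Return a uint8 mask (255 = hole) containing one random rectangle."""
    if rng is None:
        rng = np.random.default_rng()
    mask = np.zeros((height, width), dtype=np.uint8)
    rect_h = int(rng.integers(height // 4, height // 2 + 1))
    rect_w = int(rng.integers(width // 4, width // 2 + 1))
    top = int(rng.integers(0, height - rect_h + 1))
    left = int(rng.integers(0, width - rect_w + 1))
    mask[top:top + rect_h, left:left + rect_w] = 255
    return mask

mask = random_rect_mask()
```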

Getting Started

Download the pre-trained models using the following links and copy them under the ./checkpoints directory.

MangaInpainting

ScreenVAE

Testing

To test the model, create a config.yaml file similar to the example config file and copy it under your checkpoints directory.

In each case, you need to provide an input image (an image with a mask) and a mask file. Please make sure that the mask file covers the entire region to be removed in the input image. To test the model:

python test.py --checkpoints [path to checkpoints] \
      --input [path to the input image directory] \
      --mask [path to the mask directory] \
      --line [path to the structural line directory] \
      --output [path to the output directory]

We provide some test examples under the ./examples directory. Please download the pre-trained models and run:

python test.py --checkpoints ./checkpoints/mangainpaintor \
      --input examples/test/imgs/ \
      --mask examples/test/masks/ \
      --line examples/test/lines/ \
      --output examples/test/results/

This script will inpaint all images in ./examples/test/imgs using their corresponding masks in the ./examples/test/masks directory and save the results in the ./examples/test/results directory.
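
As noted above, the mask must cover the entire region to be removed. A slight dilation is a simple way to guarantee this; the sketch below uses Pillow, and both the example file name and the 9x9 filter size are arbitrary choices, not values from the paper:

```python
# Sketch: dilate a mask slightly so it fully covers the region to remove.
# The file name and the 9x9 max filter are arbitrary, illustrative choices.
from PIL import Image, ImageFilter

mask = Image.open("examples/test/masks/0001.png").convert("L")  # hypothetical file
dilated = mask.filter(ImageFilter.MaxFilter(9))  # grow the white (hole) pixels
dilated.save("examples/test/masks/0001_dilated.png")
```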

Model Configuration

The model configuration is stored in a config.yaml file under your checkpoints directory.
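
If you want to inspect or tweak the configuration programmatically, a generic PyYAML sketch like the one below works; the checkpoint path follows the example above, and the printed keys depend entirely on the actual example config file:

```python
# Sketch: load and print the checkpoint's config.yaml with PyYAML.
# The path follows the testing example above; adjust it to your setup.
import yaml

with open("checkpoints/mangainpaintor/config.yaml") as f:
    config = yaml.safe_load(f)

for key, value in sorted(config.items()):
    print(f"{key}: {value}")
```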

Citation

If any part of our paper or code is helpful to your work, please cite:

@article{xie2021seamless,
  title   = {Seamless Manga Inpainting with Semantics Awareness},
  author  = {Minshan Xie and Menghan Xia and Xueting Liu and Chengze Li and Tien-Tsin Wong},
  journal = {ACM Transactions on Graphics (SIGGRAPH 2021 issue)},
  month   = {August},
  year    = {2021},
  volume  = {40},
  number  = {4},
  pages   = {96:1--96:11}
}
