Invertible conditional GANs for image editing

Invertible Conditional GANs

A real image is encoded into a latent representation z and conditional information y, and then decoded into a new image. We fix z for each row and modify y for each column to obtain variations of the real samples.

This is the implementation of the IcGAN model proposed in our paper:

Invertible Conditional GANs for image editing. November 2016.

This paper is a summarized and updated version of my master's thesis, which you can find here:

Master thesis: Invertible Conditional Generative Adversarial Networks. September 2016.

The baseline used is the Torch implementation of the DCGAN by Radford et al.

  1. Training the model
    1. Face dataset: CelebA
    2. Digit dataset: MNIST
  2. Pre-trained CelebA model
  3. Visualize the results
    1. Reconstruct and modify real images
    2. Swap attributes
    3. Interpolate between faces

Requirements

Please refer to the DCGAN torch repository for the requirements and dependencies needed to run the code. Additionally, you will need to install the threads and optnet packages:

luarocks install threads

luarocks install optnet

In order to interactively display the results, see the setup sketched below.
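This is a minimal sketch assuming the szym/display package that the dcgan.torch baseline uses for in-browser visualization; check that repository for the exact steps.

luarocks install https://raw.githubusercontent.com/szym/display/master/display-scm-0.rockspec
th -ldisplay.start 8000 0.0.0.0

Then open http://localhost:8000 in your browser while the scripts are running.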

1. Training the model

Model overview

The IcGAN is trained in four steps; a command-line sketch of the full pipeline follows the list.

  1. Train the generator.
  2. Create a dataset of generated images with the generator.
  3. Train the encoder Z to map an image x to a latent representation z with the dataset of generated images.
  4. Train the encoder Y to map an image x to a conditional information vector y with the dataset of real images.
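Concretely, the four steps map onto the repository's scripts roughly as sketched below, using the CelebA values from section 1.1; the generator checkpoint name is only an example and depends on your run.

  # 1. Train the conditional generator
  dataset=celebA dataRoot=celebA th trainGAN.lua
  # 2. Build a dataset of generated images with the trained generator
  net=checkpoints/celebA_25_net_G.t7 outputFolder=celebA/genDataset/ samples=182638 th data/generateEncoderDataset.lua
  # 3. Train encoder Z on the generated dataset
  datasetPath=celebA/genDataset/ type=Z th trainEncoder.lua
  # 4. Train encoder Y on the real (preprocessed) dataset
  datasetPath=celebA/ type=Y th trainEncoder.lua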

All the parameters of the training phase are located in cfg/mainConfig.lua.

There is already a pre-trained model for CelebA available in case you want to skip the training part. Here you can find instructions on how to use it.

1.1 Train with a face dataset: CelebA

Note: for speed purposes, the whole dataset is loaded into RAM during training, which takes about 10 GB of RAM; 12 GB of RAM is therefore the minimum requirement. The dataset is also cached on disk as a tensor to speed up loading, which requires around 25 GB of free disk space.

Preprocess

mkdir celebA; cd celebA

Download img_align_celeba.zip here, under the link "Align&Cropped Images". You will also need to download list_attr_celeba.txt from the same page, which is found in the Anno folder.

unzip img_align_celeba.zip; cd ..
DATA_ROOT=celebA th data/preprocess_celebA.lua

Now move list_attr_celeba.txt to the celebA folder.

mv list_attr_celeba.txt celebA

Training

  • Conditional GAN: parameters are already configured to run CelebA (dataset=celebA, dataRoot=celebA).

     th trainGAN.lua
  • Generate encoder dataset:

     net=[GENERATOR_PATH] outputFolder=celebA/genDataset/ samples=182638 th data/generateEncoderDataset.lua

    (GENERATOR_PATH example: checkpoints/celebA_25_net_G.t7)

  • Train encoder Z:

     datasetPath=celebA/genDataset/ type=Z th trainEncoder.lua
    
  • Train encoder Y:

     datasetPath=celebA/ type=Y th trainEncoder.lua
    

1.2 Train with a digit dataset: MNIST

Preprocess

Download MNIST as a luarocks package: luarocks install mnist
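To quickly check that the rock is installed, a minimal sketch (field names follow the documentation of the mnist luarocks package; verify them against your install):

  local mnist = require 'mnist'
  local trainset = mnist.traindataset()   -- table with fields size, data, label
  local testset  = mnist.testdataset()
  print(trainset.size, testset.size)      -- expected: 60000   10000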

Training

  • Conditional GAN:

     name=mnist dataset=mnist dataRoot=mnist th trainGAN.lua
  • Generate encoder dataset:

     net=[GENERATOR_PATH] outputFolder=mnist/genDataset/ samples=60000 th data/generateEncoderDataset.lua

    (GENERATOR_PATH example: checkpoints/mnist_25_net_G.t7)

  • Train encoder Z:

     datasetPath=mnist/genDataset/ type=Z th trainEncoder.lua
    
  • Train encoder Y:

     datasetPath=mnist type=Y th trainEncoder.lua
    

2. Pre-trained CelebA model

The CelebA model is available for download here. The file includes the generator and both encoders (encoder Z and encoder Y).

3. Visualize the results

For visualizing the results, you will need an already trained IcGAN (i.e., a generator and two encoders). The parameters for generating results are in cfg/generateConfig.lua.
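Conceptually, the three scripts below share the same encode-modify-decode loop. The following Lua sketch illustrates that loop only; the 1x3x64x64 input size and the {z, y} table passed to the generator are assumptions for illustration and may not match the scripts' exact interfaces.

  require 'torch'
  require 'nn'

  -- Checkpoint names taken from the examples below; use your own trained files.
  local netG = torch.load('celeba_24_G.t7')     -- generator (decoder)
  local encZ = torch.load('celeba_encZ_7.t7')   -- encoder Z
  local encY = torch.load('celeba_encY_5.t7')   -- encoder Y

  -- x: a preprocessed face, batch of one (assumed 1 x 3 x 64 x 64).
  local x = torch.randn(1, 3, 64, 64)           -- stand-in for a real image

  local z = encZ:forward(x):clone()             -- latent representation
  local y = encY:forward(x):clone()             -- attribute vector (one entry per attribute)

  y[1][1] = -y[1][1]                            -- flip one attribute (index is illustrative)
  local edited = netG:forward({z, y})           -- decode the modified pair into a new image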

3.1 Reconstruct and modify real images

Reconstruction example

decNet=celeba_24_G.t7 encZnet=celeba_encZ_7.t7 encYnet=celeba_encY_5.t7 loadPath=[PATH_TO_REAL_IMAGES] th generation/reconstructWithVariations.lua

3.2 Swap attributes

Swap attributes

Swap the attribute information between two faces.

decNet=celeba_24_G.t7 encZnet=celeba_encZ_7.t7 encYnet=celeba_encY_5.t7 im1Path=[IM1] im2Path=[IM2] th generation/attributeTransfer.lua
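In terms of the sketch in section 3, attribute transfer amounts to exchanging the attribute vectors of the two encoded faces before decoding (a conceptual sketch, not the script's actual code):

  -- im1, im2: two preprocessed faces (1 x 3 x 64 x 64); netG, encZ, encY as in section 3
  local z1, y1 = encZ:forward(im1):clone(), encY:forward(im1):clone()
  local z2, y2 = encZ:forward(im2):clone(), encY:forward(im2):clone()
  local out1 = netG:forward({z1, y2}):clone()   -- face 1 rendered with face 2's attributes
  local out2 = netG:forward({z2, y1})           -- face 2 rendered with face 1's attributes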

3.3 Interpolate between faces

Interpolation

decNet=celeba_24_G.t7 encZnet=celeba_encZ_7.t7 encYnet=celeba_encY_5.t7 im1Path=[IM1] im2Path=[IM2] th generation/interpolate.lua
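Conceptually, interpolation blends both the latent codes and the attribute vectors of the two encoded faces before decoding. A sketch with five frames follows; the number of steps and the plain linear blend are illustrative choices, not necessarily what interpolate.lua does internally.

  -- netG, encZ, encY and im1, im2 as in the sketches above
  local z1, y1 = encZ:forward(im1):clone(), encY:forward(im1):clone()
  local z2, y2 = encZ:forward(im2):clone(), encY:forward(im2):clone()

  local frames = {}
  for i = 0, 4 do
    local a = i / 4                                   -- interpolation weight in [0, 1]
    local z = z1 * (1 - a) + z2 * a
    local y = y1 * (1 - a) + y2 * a
    frames[#frames + 1] = netG:forward({z, y}):clone()
  end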

Do you like or use our work? Please cite us as:

@inproceedings{Perarnau2016,
  author    = {Guim Perarnau and
               Joost van de Weijer and
               Bogdan Raducanu and
               Jose M. \'Alvarez},
  title     = {{Invertible Conditional GANs for image editing}},
  booktitle   = {NIPS Workshop on Adversarial Training},
  year      = {2016},
}