Invertible Conditional GANs for image editing

Overview

Figure: a real image is encoded into a latent representation z and conditional information y, and then decoded into a new image. z is fixed for every row and y is modified for each column to obtain variations of the real samples.
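To make the mechanism concrete, the following Lua/Torch sketch encodes a real image, modifies its conditional vector and decodes it again. The checkpoint names are those used in the examples below; the 64x64 input resolution, the attribute flip and the way z and y are fed to the generator (channel-wise concatenation) are assumptions for illustration, not the exact interface of the released scripts.

    -- Minimal sketch of the IcGAN editing pipeline (not the actual generation scripts).
    -- May additionally need require 'cunn'/'cudnn' if the checkpoints were trained on GPU.
    require 'torch'
    require 'nn'
    require 'image'

    local G    = torch.load('celeba_24_G.t7')      -- generator (decoder)
    local encZ = torch.load('celeba_encZ_7.t7')    -- encoder Z: image -> z
    local encY = torch.load('celeba_encY_5.t7')    -- encoder Y: image -> y

    local x = image.load('face.jpg')                -- hypothetical input image
    x = image.scale(x, 64, 64):view(1, 3, 64, 64)   -- assumed 64x64 input; pixel normalization omitted

    local z = encZ:forward(x):clone():view(1, -1)   -- latent representation
    local y = encY:forward(x):clone():view(1, -1)   -- attribute vector

    y[1][1] = 1 - y[1][1]                           -- flip one attribute (illustrative only)

    -- Assumption: the generator consumes z and y concatenated along the channel dimension.
    local input  = torch.cat(z:view(1, -1, 1, 1), y:view(1, -1, 1, 1), 2)
    local x_edit = G:forward(input)
    image.save('face_edited.jpg', x_edit[1])        -- output range rescaling omitted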

This is the implementation of the IcGAN model proposed in our paper:

Invertible Conditional GANs for image editing. November 2016.

This paper is a summarized and updated version of my master thesis, which you can find here:

Master thesis: Invertible Conditional Generative Adversarial Networks. September 2016.

The baseline used is the Torch implementation of the DCGAN by Radford et al.

  1. Training the model
    1. Face dataset: CelebA
    2. Digit dataset: MNIST
  2. Pre-trained CelebA model
  3. Visualize the results
    1. Reconstruct and modify real images
    2. Swap attributes
    3. Interpolate between faces

Requisites

Please refer to the DCGAN torch repository for the requirements and dependencies needed to run the code. Additionally, you will need to install the threads and optnet packages:

luarocks install threads

luarocks install optnet

In order to interactively display the results, you will also need a display server, such as the display package used by the DCGAN baseline (see that repository for setup instructions).
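As a minimal sketch (assuming the szym/display package and that its server has been started, e.g. with th -ldisplay.start 8000 0.0.0.0, with the browser pointed at http://localhost:8000), a generated image could be shown like this:

    -- Sketch only: assumes the szym/display package is installed and its server is running.
    require 'torch'
    require 'image'
    local display = require 'display'

    local img = image.load('results/sample.png')    -- hypothetical path to a generated image
    display.image(img, {title = 'IcGAN sample'})    -- pushes the image to the browser window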

1. Training the model

Model overview

The IcGAN is trained in four steps.

  1. Train the generator.
  2. Create a dataset of generated images with the generator (roughly sketched after this list).
  3. Train the encoder Z to map an image x to a latent representation z with the dataset of generated images.
  4. Train the encoder Y to map an image x to a conditional information vector y with the dataset of real images.
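As a rough illustration of step 2 (the actual logic lives in data/generateEncoderDataset.lua), one can sample pairs (z, y), decode them with the trained generator, and store the triplets that encoder Z is later trained on. The latent size, attribute size, noise distribution and the generator's input layout below are assumptions, not the script's exact settings:

    -- Sketch of building the encoder-Z training set (not the actual script).
    require 'torch'
    require 'nn'   -- plus 'cunn'/'cudnn' if the checkpoint was trained on GPU

    local G = torch.load('checkpoints/celebA_25_net_G.t7')   -- generator path from the examples below
    local nSamples, nz, ny = 1000, 100, 18                   -- assumed sizes; the real run uses samples=182638

    local dataset = {}
    for i = 1, nSamples do
      local z = torch.randn(1, nz, 1, 1)                     -- assumes a Gaussian prior on z
      local y = torch.rand(1, ny):gt(0.5):double()           -- assumes binary attribute vectors
      -- Assumption: generator input is z and y concatenated along the channel dimension.
      local x = G:forward(torch.cat(z, y:view(1, ny, 1, 1), 2)):clone()
      dataset[i] = {image = x, z = z, y = y}
    end
    torch.save('encZ_dataset_sketch.t7', dataset)            -- the real script writes to outputFolder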

All the parameters of the training phase are located in cfg/mainConfig.lua.

There is already a pre-trained model for CelebA available in case you want to skip the training part; see section 2 below.

1.1 Train with a face dataset: CelebA

Note: for speed purposes, the whole dataset will be loaded into RAM during training, which takes about 10 GB of RAM; therefore, 12 GB of RAM is the minimum requirement. The dataset will also be stored as a tensor to load it faster, so make sure you have around 25 GB of free disk space.

Preprocess

mkdir celebA; cd celebA

Download img_align_celeba.zip here under the link "Align&Cropped Images". You will also need to download list_attr_celeba.txt from the same page, found in the Anno folder.

unzip img_align_celeba.zip; cd ..
DATA_ROOT=celebA th data/preprocess_celebA.lua

Now move list_attr_celeba.txt to the celebA folder.

mv list_attr_celeba.txt celebA

Training

  • Conditional GAN: parameters are already configured to run CelebA (dataset=celebA, dataRoot=celebA).

     th trainGAN.lua
  • Generate encoder dataset:

     net=[GENERATOR_PATH] outputFolder=celebA/genDataset/ samples=182638 th data/generateEncoderDataset.lua

    (GENERATOR_PATH example: checkpoints/celebA_25_net_G.t7)

  • Train encoder Z:

     datasetPath=celebA/genDataset/ type=Z th trainEncoder.lua
    
  • Train encoder Y:

     datasetPath=celebA/ type=Y th trainEncoder.lua
    

1.2 Train with a digit dataset: MNIST

Preprocess

Download MNIST as a luarocks package: luarocks install mnist
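A quick sanity check that the package is installed (a sketch; the field names are those documented by the mnist rock and may differ in other versions):

    -- Sketch: load MNIST through the luarocks 'mnist' package and inspect one example.
    local mnist = require 'mnist'
    local train = mnist.traindataset()
    print(train.size)          -- expected: 60000, matching the samples= value used below
    local ex = train[1]        -- one example: ex.x is a 28x28 image tensor, ex.y its label
    print(ex.x:size(), ex.y)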

Training

  • Conditional GAN:

     name=mnist dataset=mnist dataRoot=mnist th trainGAN.lua
  • Generate encoder dataset:

     net=[GENERATOR_PATH] outputFolder=mnist/genDataset/ samples=60000 th data/generateEncoderDataset.lua

    (GENERATOR_PATH example: checkpoints/mnist_25_net_G.t7)

  • Train encoder Z:

     datasetPath=mnist/genDataset/ type=Z th trainEncoder.lua
    
  • Train encoder Y:

     datasetPath=mnist type=Y th trainEncoder.lua
    

2. Pre-trained CelebA model

The pre-trained CelebA model is available for download here. The file includes the generator and both encoders (encoder Z and encoder Y).

3. Visualize the results

For visualizing the results, you will need an already trained IcGAN (i.e., a generator and the two encoders). The parameters for generating results are located in cfg/generateConfig.lua.

3.1 Reconstruct and modify real images

Reconstruction example

decNet=celeba_24_G.t7 encZnet=celeba_encZ_7.t7 encYnet=celeba_encY_5.t7 loadPath=[PATH_TO_REAL_IMAGES] th generation/reconstructWithVariations.lua

3.2 Swap attributes

Swap attributes

Swap the attribute information y between two faces.

decNet=celeba_24_G.t7 encZnet=celeba_encZ_7.t7 encYnet=celeba_encY_5.t7 im1Path=[IM1] im2Path=[IM2] th generation/attributeTransfer.lua

3.3 Interpolate between faces

Interpolation

decNet=celeba_24_G.t7 encZnet=celeba_encZ_7.t7 encYnet=celeba_encY_5.t7 im1Path=[IM1] im2Path=[IM2] th generation/interpolate.lua
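Under the hood, interpolation amounts to linearly blending both the latent and the conditional vectors of the two encoded faces, z_t = (1 - t) * z1 + t * z2 and y_t = (1 - t) * y1 + t * y2, and decoding each intermediate pair. A hedged Lua sketch under the same assumptions as the overview sketch (64x64 inputs, concatenated generator input, normalization omitted):

    -- Sketch of face interpolation (not the actual generation/interpolate.lua).
    require 'torch'
    require 'nn'
    require 'image'

    local G    = torch.load('celeba_24_G.t7')
    local encZ = torch.load('celeba_encZ_7.t7')
    local encY = torch.load('celeba_encY_5.t7')

    local function encode(path)
      local x = image.scale(image.load(path), 64, 64):view(1, 3, 64, 64)
      return encZ:forward(x):clone(), encY:forward(x):clone()
    end
    local z1, y1 = encode('im1.jpg')   -- hypothetical image paths
    local z2, y2 = encode('im2.jpg')

    local nSteps = 8
    for i = 0, nSteps do
      local t = i / nSteps
      local z = z1 * (1 - t) + z2 * t                  -- linear interpolation in latent space
      local y = y1 * (1 - t) + y2 * t                  -- ... and in attribute space
      -- Assumption: generator input is z and y concatenated along the channel dimension.
      local x = G:forward(torch.cat(z:view(1, -1, 1, 1), y:view(1, -1, 1, 1), 2))
      image.save(('interp_%02d.jpg'):format(i), x[1])
    end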

Do you like or use our work? Please cite us as

@inproceedings{Perarnau2016,
  author    = {Guim Perarnau and
               Joost van de Weijer and
               Bogdan Raducanu and
               Jose M. \'Alvarez},
  title     = {{Invertible Conditional GANs for image editing}},
  booktitle = {NIPS Workshop on Adversarial Training},
  year      = {2016},
}