InterFaceGAN - Interpreting the Latent Space of GANs for Semantic Face Editing

Overview

Python 3.7 · PyTorch 1.1.0 · TensorFlow 1.12.2 · scikit-learn 0.21.2

Figure: High-quality facial attribute editing results with InterFaceGAN.

In this repository, we propose an approach, termed InterFaceGAN, for semantic face editing. Specifically, InterFaceGAN is capable of turning an unconditionally trained face synthesis model into a controllable GAN by interpreting the very first latent space and finding the hidden semantic subspaces.

[Paper (CVPR)] [Paper (TPAMI)] [Project Page] [Demo] [Colab]

How to Use

Pick up a model, pick up a boundary, pick up a latent code, and then EDIT!

# Before running the following code, please first download
# the pre-trained ProgressiveGAN model on the CelebA-HQ dataset
# and place it under the folder "models/pretrain/".
LATENT_CODE_NUM=10
python edit.py \
    -m pggan_celebahq \
    -b boundaries/pggan_celebahq_smile_boundary.npy \
    -n "$LATENT_CODE_NUM" \
    -o results/pggan_celebahq_smile_editing

GAN Models Used (Prior Work)

Before going into details, we would like to first introduce the two state-of-the-art GAN models used in this work, which are ProgressiveGAN (Karras et al., ICLR 2018) and StyleGAN (Karras et al., CVPR 2019). These two models achieve high-quality face synthesis by learning unconditional GANs. For more details about these two models, please refer to the original papers, as well as the official implementations.

ProgressiveGAN: [Paper] [Code]

StyleGAN: [Paper] [Code]

Code Instruction

Generative Models

A GAN-based generative model basically maps latent codes (commonly sampled from a high-dimensional latent space, such as the standard normal distribution) to photo-realistic images. Accordingly, a base class for generators, called BaseGenerator, is defined in models/base_generator.py. It should contain the following member functions (a usage sketch follows the list):

  • build(): Build a PyTorch module.
  • load(): Load pre-trained weights.
  • convert_tf_model() (Optional): Convert pre-trained weights from a TensorFlow model.
  • sample(): Randomly sample latent codes. This function should specify what kind of distribution the latent codes are subject to.
  • preprocess(): Preprocess the latent codes before feeding them into the generator.
  • synthesize(): Run the model to get synthesized results (or any other intermediate outputs).
  • postprocess(): Post-process the outputs from the generator and convert them to images.
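A minimal sketch of how these member functions fit together, assuming the provided PGGANGenerator class from models/pggan_generator.py; the exact constructor signature and return formats may differ, so treat this as an illustration of the pipeline rather than the definitive API:

from models.pggan_generator import PGGANGenerator

generator = PGGANGenerator('pggan_celebahq')       # build() and load() are expected to run internally (assumption)
latent_codes = generator.sample(4)                 # draw 4 latent codes from the model's latent distribution
latent_codes = generator.preprocess(latent_codes)  # normalize/reshape codes as required by the model
outputs = generator.synthesize(latent_codes)       # forward pass through the generator
images = generator.postprocess(outputs)            # convert raw generator outputs to viewable images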

We have already provided the following models in this repository:

  • ProgressiveGAN:
    • A clone of the official TensorFlow implementation: models/pggan_tf_official/. This clone is only used for converting TensorFlow pre-trained weights to PyTorch ones. The conversion is done automatically the first time the model is used; after that, the TensorFlow version is no longer needed.
    • PyTorch implementation of the official model (inference only): models/pggan_generator_model.py.
    • Generator class derived from BaseGenerator: models/pggan_generator.py.
    • Please download the officially released model trained on the CelebA-HQ dataset and place it in the folder models/pretrain/.
  • StyleGAN:
    • A clone of the official TensorFlow implementation: models/stylegan_tf_official/. This clone is only used for converting TensorFlow pre-trained weights to PyTorch ones. The conversion is done automatically the first time the model is used; after that, the TensorFlow version is no longer needed.
    • PyTorch implementation of the official model (inference only): models/stylegan_generator_model.py.
    • Generator class derived from BaseGenerator: models/stylegan_generator.py.
    • Please download the officially released models trained on the CelebA-HQ and FF-HQ datasets and place them in the folder models/pretrain/.
    • Supports synthesizing images from the $\mathcal{Z}$ space, the $\mathcal{W}$ space, and the extended $\mathcal{W}$ space (18x512).
    • Set the truncation trick and the noise-randomization trick in models/model_settings.py. Among them, STYLEGAN_RANDOMIZE_NOISE is highly recommended to be set to False. STYLEGAN_TRUNCATION_PSI = 0.7 and STYLEGAN_TRUNCATION_LAYERS = 8 are inherited from the official implementation. Users can customize these settings for their own models. NOTE: These three settings will NOT affect the pre-trained weights.
  • Customized model:
    • Users can experiment with their own models by deriving a new class from BaseGenerator (a minimal skeleton is sketched after this list).
    • Before being used, a new model should first be registered in MODEL_POOL in models/model_settings.py.
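For a customized model, a hypothetical skeleton might look as follows; the class name and the MODEL_POOL entry are purely illustrative, and the required fields should mirror the existing entries in models/model_settings.py:

from models.base_generator import BaseGenerator

class MyGANGenerator(BaseGenerator):
    def build(self):
        # Instantiate your PyTorch module here.
        raise NotImplementedError

    def load(self):
        # Load the pre-trained weights for the module built above.
        raise NotImplementedError

    def sample(self, num):
        # Sample `num` latent codes from your model's latent distribution.
        raise NotImplementedError

    # preprocess(), synthesize(), and postprocess() should be overridden as well.

# In models/model_settings.py (illustrative entry; copy the fields used by the existing models):
# MODEL_POOL['my_gan'] = {...}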

Utility Functions

We provide the following utility functions in utils/manipulator.py to make InterFaceGAN much easier to use (a usage sketch follows the list).

  • train_boundary(): This function can be used for boundary searching. It takes pre-prepared latent codes and the corresponding attribute scores as inputs, and then outputs the normal direction of the separation boundary. This is achieved by training a linear SVM. The returned vector can be further used for semantic face editing.
  • project_boundary(): This function can be used for conditional manipulation. It takes a primal direction and other conditional directions as inputs, and then outputs a new normalized direction. Moving latent code along this new direction will manipulate the primal attribute yet barely affect the conditioned attributes. NOTE: For now, at most two conditions are supported.
  • linear_interpolate(): This function can be used for semantic face editing. It takes a latent code and the normal direction of a particular semantic boundary as inputs, and then outputs a collection of manipulated latent codes obtained by linear interpolation. These interpolations can be used to see how the synthesis varies when the latent code moves along the given direction.
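A minimal sketch of chaining these functions, assuming latent codes and attribute scores have already been saved as .npy files (the file names are illustrative, and keyword arguments are omitted so the defaults in utils/manipulator.py apply):

import numpy as np
from utils.manipulator import train_boundary, linear_interpolate

latent_codes = np.load('data/pggan_celebahq/z.npy')        # shape: (num, latent_dim)
scores = np.load('data/pggan_celebahq/smile_scores.npy')   # shape: (num, 1)

boundary = train_boundary(latent_codes, scores)            # normal direction of the separating hyperplane
edited = linear_interpolate(latent_codes[:1], boundary)    # one latent code interpolated along the boundary normal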

Tools

  • generate_data.py: This script can be used for data preparation. It generates a collection of syntheses (the images are saved for further attribute prediction) and saves the corresponding input latent codes.

  • train_boundary.py: This script can be used for boundary searching.

  • edit.py: This script can be used for semantic face editing.

Usage

We take the ProgressiveGAN model trained on the CelebA-HQ dataset as an example.

Prepare data

NUM=10000
python generate_data.py -m pggan_celebahq -o data/pggan_celebahq -n "$NUM"

Predict Attribute Score

Get your own predictor for attribute $ATTRIBUTE_NAME, evaluate it on all generated images, and save the inference results as data/pggan_celebahq/"$ATTRIBUTE_NAME"_scores.npy. NOTE: The saved results should have shape ($NUM, 1).
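A minimal sketch of saving predictor outputs in the expected format, assuming a hypothetical predict_attribute() function that returns one scalar score per image (the glob pattern is an assumption about how generate_data.py names its outputs; make sure the image order matches the saved latent codes):

import glob
import numpy as np

# predict_attribute() is a hypothetical user-provided attribute predictor.
image_paths = sorted(glob.glob('data/pggan_celebahq/*.jpg'))
scores = np.array([[predict_attribute(path)] for path in image_paths])   # shape: (NUM, 1)
np.save('data/pggan_celebahq/smile_scores.npy', scores.astype(np.float32))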

Search Semantic Boundary

python train_boundary.py \
    -o boundaries/pggan_celebahq_"$ATTRIBUTE_NAME" \
    -c data/pggan_celebahq/z.npy \
    -s data/pggan_celebahq/"$ATTRIBUTE_NAME"_scores.npy

Compute Conditional Boundary (Optional)

This step is optional and depends on whether conditional manipulation is needed. Users can use the function project_boundary() in utils/manipulator.py to compute the projected direction.
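A minimal sketch of computing a conditional boundary from two released single boundaries (the output file name is illustrative; as noted above, at most two conditions are supported):

import numpy as np
from utils.manipulator import project_boundary

age_boundary = np.load('boundaries/pggan_celebahq_age_boundary.npy')
gender_boundary = np.load('boundaries/pggan_celebahq_gender_boundary.npy')

# Age direction with the gender component projected out, for conditional manipulation.
age_c_gender = project_boundary(age_boundary, gender_boundary)
np.save('boundaries/pggan_celebahq_age_c_gender_boundary.npy', age_c_gender)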

Boundaries Description

We provide the following boundaries in the folder boundaries/. The boundaries can be made more accurate if a stronger attribute predictor is used (a quick sanity-check snippet is given after the list).

  • ProgressiveGAN model trained on CelebA-HQ dataset:

    • Single boundary:
      • pggan_celebahq_pose_boundary.npy: Pose.
      • pggan_celebahq_smile_boundary.npy: Smile (expression).
      • pggan_celebahq_age_boundary.npy: Age.
      • pggan_celebahq_gender_boundary.npy: Gender.
      • pggan_celebahq_eyeglasses_boundary.npy: Eyeglasses.
      • pggan_celebahq_quality_boundary.npy: Image quality.
    • Conditional boundary:
      • pggan_celebahq_age_c_gender_boundary.npy: Age (conditioned on gender).
      • pggan_celebahq_age_c_eyeglasses_boundary.npy: Age (conditioned on eyeglasses).
      • pggan_celebahq_age_c_gender_eyeglasses_boundary.npy: Age (conditioned on gender and eyeglasses).
      • pggan_celebahq_gender_c_age_boundary.npy: Gender (conditioned on age).
      • pggan_celebahq_gender_c_eyeglasses_boundary.npy: Gender (conditioned on eyeglasses).
      • pggan_celebahq_gender_c_age_eyeglasses_boundary.npy: Gender (conditioned on age and eyeglasses).
      • pggan_celebahq_eyeglasses_c_age_boundary.npy: Eyeglasses (conditioned on age).
      • pggan_celebahq_eyeglasses_c_gender_boundary.npy: Eyeglasses (conditioned on gender).
      • pggan_celebahq_eyeglasses_c_age_gender_boundary.npy: Eyeglasses (conditioned on age and gender).
  • StyleGAN model trained on CelebA-HQ dataset:

    • Single boundary in $\mathcal{Z}$ space:
      • stylegan_celebahq_pose_boundary.npy: Pose.
      • stylegan_celebahq_smile_boundary.npy: Smile (expression).
      • stylegan_celebahq_age_boundary.npy: Age.
      • stylegan_celebahq_gender_boundary.npy: Gender.
      • stylegan_celebahq_eyeglasses_boundary.npy: Eyeglasses.
    • Single boundary in $\mathcal{W}$ space:
      • stylegan_celebahq_pose_w_boundary.npy: Pose.
      • stylegan_celebahq_smile_w_boundary.npy: Smile (expression).
      • stylegan_celebahq_age_w_boundary.npy: Age.
      • stylegan_celebahq_gender_w_boundary.npy: Gender.
      • stylegan_celebahq_eyeglasses_w_boundary.npy: Eyeglasses.
  • StyleGAN model trained on FF-HQ dataset:

    • Single boundary in $\mathcal{Z}$ space:
      • stylegan_ffhq_pose_boundary.npy: Pose.
      • stylegan_ffhq_smile_boundary.npy: Smile (expression).
      • stylegan_ffhq_age_boundary.npy: Age.
      • stylegan_ffhq_gender_boundary.npy: Gender.
      • stylegan_ffhq_eyeglasses_boundary.npy: Eyeglasses.
    • Conditional boundary in $\mathcal{Z}$ space:
      • stylegan_ffhq_age_c_gender_boundary.npy: Age (conditioned on gender).
      • stylegan_ffhq_age_c_eyeglasses_boundary.npy: Age (conditioned on eyeglasses).
      • stylegan_ffhq_eyeglasses_c_age_boundary.npy: Eyeglasses (conditioned on age).
      • stylegan_ffhq_eyeglasses_c_gender_boundary.npy: Eyeglasses (conditioned on gender).
    • Single boundary in $\mathcal{W}$ space:
      • stylegan_ffhq_pose_w_boundary.npy: Pose.
      • stylegan_ffhq_smile_w_boundary.npy: Smile (expression).
      • stylegan_ffhq_age_w_boundary.npy: Age.
      • stylegan_ffhq_gender_w_boundary.npy: Gender.
      • stylegan_ffhq_eyeglasses_w_boundary.npy: Eyeglasses.
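As a quick sanity check, each boundary file can be loaded with NumPy. It stores the normal direction of the corresponding separating hyperplane; for the 512-dimensional latent spaces of the models above, the expected shape is (1, 512), but treat the exact shape as something to verify rather than a guarantee:

import numpy as np

boundary = np.load('boundaries/pggan_celebahq_smile_boundary.npy')
print(boundary.shape)            # expected: (1, 512)
print(np.linalg.norm(boundary))  # close to 1 if the stored direction is normalized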

BibTeX

@inproceedings{shen2020interpreting,
  title     = {Interpreting the Latent Space of GANs for Semantic Face Editing},
  author    = {Shen, Yujun and Gu, Jinjin and Tang, Xiaoou and Zhou, Bolei},
  booktitle = {CVPR},
  year      = {2020}
}
@article{shen2020interfacegan,
  title   = {InterFaceGAN: Interpreting the Disentangled Face Representation Learned by GANs},
  author  = {Shen, Yujun and Yang, Ceyuan and Tang, Xiaoou and Zhou, Bolei},
  journal = {TPAMI},
  year    = {2020}
}