AttGAN: Facial Attribute Editing by Only Changing What You Want (IEEE TIP 2019)

Overview

News

  • 11 Jan 2020: We cleaned up the code to make it more readable! The old version is here: v1.


AttGAN
TIP Nov. 2019, arXiv Nov. 2017

TensorFlow implementation of AttGAN: Facial Attribute Editing by Only Changing What You Want.

Related

Exemplar Results

  • See results.md for more results; we also try higher resolutions and more attributes (all 40 attributes!)

  • Inverting each of the 13 attributes individually

    from left to right: Input, Reconstruction, Bald, Bangs, Black_Hair, Blond_Hair, Brown_Hair, Bushy_Eyebrows, Eyeglasses, Male, Mouth_Slightly_Open, Mustache, No_Beard, Pale_Skin, Young

Usage

  • Environment

    • Python 3.6

    • TensorFlow 1.15

    • OpenCV, scikit-image, tqdm, oyaml

    • we recommend Anaconda or Miniconda; you can then create the AttGAN environment with the commands below

      conda create -n AttGAN python=3.6
      
      source activate AttGAN
      
      conda install opencv scikit-image tqdm tensorflow-gpu=1.15
      
      conda install -c conda-forge oyaml
    • NOTICE: if you create a new conda environment, remember to activate it before any other command

      source activate AttGAN
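    • a quick sanity check of the environment (a minimal sketch; it assumes the packages above installed cleanly and is run inside the activated env)

      # sanity check for the AttGAN environment (not part of the repository)
      import tensorflow as tf
      import cv2
      import skimage
      import tqdm
      import oyaml

      print(tf.__version__)              # expected: 1.15.x
      print(tf.test.is_gpu_available())  # True if tensorflow-gpu sees a GPU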
  • Data Preparation

    • Option 1: CelebA-unaligned (higher quality than the aligned data, 10.2GB)

      • download the dataset

      • unzip and process the data

        7z x ./data/img_celeba/img_celeba.7z/img_celeba.7z.001 -o./data/img_celeba/
        
        unzip ./data/img_celeba/annotations.zip -d ./data/img_celeba/
        
        python ./scripts/align.py
    • Option 2: CelebA-HQ (we use the data from CelebAMask-HQ, 3.2GB)

      • CelebAMask-HQ.zip (move to ./data/CelebAMask-HQ.zip): Google Drive or Baidu Netdisk

      • unzip and process the data

        unzip ./data/CelebAMask-HQ.zip -d ./data/
        
        python ./scripts/split_CelebA-HQ.py
  • Run AttGAN

    • training (see examples.md for more training commands)

      # for CelebA
      CUDA_VISIBLE_DEVICES=0 \
      python train.py \
      --load_size 143 \
      --crop_size 128 \
      --model model_128 \
      --experiment_name AttGAN_128
      
      # for CelebA-HQ
      CUDA_VISIBLE_DEVICES=0 \
      python train.py \
      --img_dir ./data/CelebAMask-HQ/CelebA-HQ-img \
      --train_label_path ./data/CelebAMask-HQ/train_label.txt \
      --val_label_path ./data/CelebAMask-HQ/val_label.txt \
      --load_size 128 \
      --crop_size 128 \
      --n_epochs 200 \
      --epoch_start_decay 100 \
      --model model_128 \
      --experiment_name AttGAN_128_CelebA-HQ
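      The --load_size 143 / --crop_size 128 pair suggests a resize-then-random-crop augmentation; a minimal sketch under that assumption follows (the actual input pipeline lives in the repository's data code, and the [-1, 1] pixel scaling is an assumption):

      # assumed preprocessing implied by --load_size / --crop_size (sketch)
      import tensorflow as tf

      def preprocess(img, load_size=143, crop_size=128):
          # resize the decoded image, then take a random crop for augmentation
          img = tf.image.resize(img, [load_size, load_size])
          img = tf.image.random_crop(img, [crop_size, crop_size, 3])
          # assumption: scale pixel values to [-1, 1], as is common for GANs
          return tf.cast(img, tf.float32) / 127.5 - 1.0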
    • testing

      • single attribute editing (inversion)

        # for CelebA
        CUDA_VISIBLE_DEVICES=0 \
        python test.py \
        --experiment_name AttGAN_128
        
        # for CelebA-HQ
        CUDA_VISIBLE_DEVICES=0 \
        python test.py \
        --img_dir ./data/CelebAMask-HQ/CelebA-HQ-img \
        --test_label_path ./data/CelebAMask-HQ/test_label.txt \
        --experiment_name AttGAN_128_CelebA-HQ
      • multiple attribute editing (inversion) example

        # for CelebA
        CUDA_VISIBLE_DEVICES=0 \
        python test_multi.py \
        --test_att_names Bushy_Eyebrows Pale_Skin \
        --experiment_name AttGAN_128
      • attribute sliding example

        # for CelebA
        CUDA_VISIBLE_DEVICES=0 \
        python test_slide.py \
        --test_att_name Pale_Skin \
        --test_int_min -2 \
        --test_int_max 2 \
        --test_int_step 0.5 \
        --experiment_name AttGAN_128
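        The command sweeps the Pale_Skin intensity over [test_int_min, test_int_max] in steps of test_int_step; a minimal sketch of that enumeration (assumed from the flags, not the script's exact code):

        # enumerate the sliding intensities implied by the flags above (sketch)
        import numpy as np

        intensities = np.arange(-2.0, 2.0 + 1e-8, 0.5)
        print(intensities)  # [-2.  -1.5 -1.  -0.5  0.   0.5  1.   1.5  2. ]
        # each value replaces the Pale_Skin entry of the attribute vector fed
        # to the generator, producing a gradual transition in the output strip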
    • loss visualization

      CUDA_VISIBLE_DEVICES='' \
      tensorboard \
      --logdir ./output/AttGAN_128/summaries \
      --port 6006
    • convert trained model to .pb file

      python to_pb.py --experiment_name AttGAN_128
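      Once exported, the frozen graph can be loaded for inference with plain TensorFlow 1.x. A hedged sketch follows; the .pb path and tensor names are assumptions, so list the graph's operations to find the real input/output tensors:

      # load a frozen .pb exported by to_pb.py (TF 1.15)
      import tensorflow as tf

      pb_path = './output/AttGAN_128/generator.pb'  # hypothetical location

      with tf.io.gfile.GFile(pb_path, 'rb') as f:
          graph_def = tf.compat.v1.GraphDef()
          graph_def.ParseFromString(f.read())

      with tf.Graph().as_default() as graph:
          tf.import_graph_def(graph_def, name='')

      with tf.compat.v1.Session(graph=graph) as sess:
          # inspect operation names to locate the real input/output tensors;
          # sess.run(...) with those tensors would go here
          for op in graph.get_operations()[:20]:
              print(op.name)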
  • Using Trained Weights

  • Example for Custom Dataset

Citation

If you find AttGAN useful in your research work, please consider citing:

@ARTICLE{8718508,
  author={Z. {He} and W. {Zuo} and M. {Kan} and S. {Shan} and X. {Chen}},
  journal={IEEE Transactions on Image Processing},
  title={AttGAN: Facial Attribute Editing by Only Changing What You Want},
  year={2019},
  volume={28},
  number={11},
  pages={5464-5478},
  keywords={Facial attribute editing;attribute style manipulation;adversarial learning},
  doi={10.1109/TIP.2019.2916751},
  ISSN={1057-7149},
  month={Nov},
}
Comments
  • TypeError

    Hello, I have downloaded the trained model and am trying to test it, but I am getting the following error. Can you please suggest what went wrong?

    I am testing it on Google Colab and using only images 182000 to 182637. TypeError: Input 'filename' of 'ReadFile' Op has type float32 that does not match expected type of string.

    opened by shbnm21 21
  • Unable to use different number of images

    Hello. I am using the HD-CelebA 384 dataset with the provided 384_shortcut1_inject1_none_hd model. I am trying to use a custom number of images instead of all 202599 images. I tried the following: modify the list_attr_celeba.txt file to include only the first 20 images, and put those 20 images in ./data/img_crop_celeba/*.jpg. However, this is the error I get:

    TypeError: Input 'filename' of 'ReadFile' Op has type float32 that does not match expected type of string.

    I also tried to train with only 20 images and got the same error. I get no errors when running train/test with all 202599 images.

    opened by githubusername001 10
  • Questions about the handling of noise z in DTLCGAN with an encoder attached

    Hi, I am referring to your DTLCGAN code. Following your last reply, I added an encoder to it and have some questions about your code. In your train.py, the z_sample you choose to sample is generated by: z_ipt_samples = [np.stack([np.random.normal(size=[z_dim])] * len(c_ipt_sample)) for i in range(15)], which has shape (15, 18, 100).

    So, now that I use an encoder, the noise z (as well as the z_ipt used in training) should be replaced by the encoder's output, right?

    But what does len(c_ipt_sample) mean here? You generated 18 noises for one testing sample? I counted your sampled training images: the lowest layer of your decision tree does have 18 images (2*3*3 = 18). So why do you generate testing samples from bottom to top, and not the reverse? How can you be certain that these 18 noises all belong to the same person if you generate from bottom to top?

    Besides, should my encoder do the parallel thing, choosing 18 frontcodes from 18 images and using them to do the sampling? That seems wrong, because my 18 frontcodes come from 18 different images (that is, 18 different persons), and the resulting sampling tree was weird (some results are OK, and those confuse me). But if I use the same frontcode of one image (the same person) and copy it 18 times, the training samples are all the same, with no change of attributes at all.

    opened by XijieJiao 8
  • Facial Attribute

    Hello @LynnHo, can you tell us how to do facial feature extraction? That is, given any input face image, how can we get the 40 facial attributes from it?

    Thanks.

    opened by xyzdcgan 8
  • The same result for all the attributes.

    As written in the title, I obtain a row of identical images without any changes, regardless of the attribute (column). I use a custom dataset organized like CelebA. Could you give advice on what may cause this?

    opened by acecreamu 8
  • Attribute Classifier for Editing Accuracy/Error

    I'm curious what you used for the attribute classifier to measure the attribute editing accuracy and the preservation error. Also, do you have any plans to release this trained model? Thanks.

    opened by tegillis 7
  • About the performance of pretrained model

    The pre-trained model you provided does not perform well on the CelebA-HQ dataset. So I have a question: for how many epochs did you train the pre-trained model, and on which dataset? Another question: my use case is applying glasses to a face, so I need to know whether training a new model from scratch on the CelebA-HQ dataset would help me achieve this task. Can we train the model on a single attribute like eyeglasses or a smile? Thanks in advance.

    opened by alan-ai-learner 4
  • Attribute Style Manipulation

    Hi, thank you for sharing this great project. I found your attribute style manipulation particularly meaningful and useful for my recent research. I saw from a previous issue that you have no plan to open-source the code for this part. I have the following questions:

    1. I found nowhere in your paper how you derive your θ or the relationship between θ and the image, so how do you get the θ in an unsupervised way for each input?
    2. Is this part's idea (and the way you derived θ) based on the paper 'Generative Attribute Controller with Conditional Filtered Generative Adversarial Networks'? (I found their code is also not open source.)
    3. If I want to implement this part myself, could you give me some hints on where to start, or any papers and sources I could refer to? (There are really very few works on accurate or multiple attribute style manipulation.)

    Thank you!

    opened by XijieJiao 4
  • Cannot get a desired result on CelebA-HQ dataset

    Hi there,

    Your work is interesting. I have a problem. Could you help figure it out?

    I applied your method to the CelebA-HQ dataset for single-attribute manipulation, but I cannot get the desired result. The result (the attribute of interest is "Smiling") at the 59th training epoch shows no change in the third-column images. [result image omitted]

    Thanks and Regards,

    opened by EvaFlower 4
  • Hi, I have a question about the training and the test

    First, I appreciate your excellent work and have been interested in your work since 2018.

    I have a question about the test and training in your work. To be clear, I consider the case where the attribute values are binary.

    For training, the attribute values seem to be -1 or 1 (read 0 or 1, then *2 - 1 -> [-1, 1]). (https://github.com/LynnHo/AttGAN-Tensorflow/blob/master/train.py#L161)

    On the other hand, the range of attributes is [-2, 2] for testing (read 0 or 1, then *2 - 1 -> [-1, 1], finally *2 -> [-2, 2], with test_int = 2.0). (https://github.com/LynnHo/AttGAN-Tensorflow/blob/master/train.py#L246)

    Is it right that you use different values of the attribute vector in training and testing?

    I find that I cannot reproduce the attribute-classification result without this trick; I can reproduce it only by using [-2, 2].

    Thanks!

    opened by FriedRonaldo 4
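    For reference, the mapping described above takes only a few lines (a sketch based on the quoted train.py lines, with test_int = 2.0 as the scaling applied at test time):

      # label mapping described in the issue above (sketch, not the repo's code)
      import numpy as np

      labels = np.array([0, 1, 1, 0], dtype=np.float32)  # raw 0/1 labels

      train_att = labels * 2 - 1          # training range: [-1, 1]
      test_att = (labels * 2 - 1) * 2.0   # test range: [-2, 2] (test_int)

      print(train_att)  # [-1.  1.  1. -1.]
      print(test_att)   # [-2.  2.  2. -2.]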
  • Why attributes are encoded into [-1, 1] not [0, 1]

    @LynnHo Hi, I have been reading your code these days, and I wonder why the attribute labels have to be mapped into [-1, 1] instead of [0, 1]. It seems to be very important and to have some technical reason, because you put three exclamation marks in the comment on that code. Could you share some experimental knowledge about this?

    opened by ChengBinJin 4
  • Applying your code in datasets from masked face to non-masked face

    I want to apply this code to CelebA with fake masked images, and I want to remove the masks. How can I apply this concept? Can you guide me on whether I can do it using your code? If yes, where should I change the code? Just train.py and data.py?

    opened by Nuha1412 1
  • Style manipulation not robust, very sensitive to varied parameters

    How do you get a good balance among the various hyper-parameters, like the different loss weights and the learning rate, when style manipulation is adopted? I found the training of the network very unstable.

    I can get style manipulation results on bangs and eyeglasses, but the control is unstable and the sharpness of the images is also affected. The control on eyeglasses only affects the shade; the model has no control over shape or size.

    Apart from the hyper-parameters, are there any other places where problems could arise, such as the training settings?

    Besides, when implementing style manipulation, do you also use the original attribute loss in addition to the loss of the generated style controller?

    Looking forward to your answer. Thank you!

    opened by jiaoxijie 2
Releases
  • v1

Owner
Zhenliang He