AttGAN: Facial Attribute Editing by Only Changing What You Want (IEEE TIP 2019)

Overview

News

  • 11 Jan 2020: We cleaned up the code to make it more readable! The old version is available here: v1.



AttGAN
TIP Nov. 2019, arXiv Nov. 2017

TensorFlow implementation of AttGAN: Facial Attribute Editing by Only Changing What You Want.

Related

Exemplar Results

  • See results.md for more results; there we try higher resolutions and more attributes (all 40 attributes!)

  • Inverting each of 13 attributes individually

    From left to right: Input, Reconstruction, Bald, Bangs, Black_Hair, Blond_Hair, Brown_Hair, Bushy_Eyebrows, Eyeglasses, Male, Mouth_Slightly_Open, Mustache, No_Beard, Pale_Skin, Young

Usage

  • Environment

    • Python 3.6

    • TensorFlow 1.15

    • OpenCV, scikit-image, tqdm, oyaml

    • we recommend Anaconda or Miniconda; you can then create the AttGAN environment with the commands below

      conda create -n AttGAN python=3.6
      
      source activate AttGAN
      
      conda install opencv scikit-image tqdm tensorflow-gpu=1.15
      
      conda install -c conda-forge oyaml
    • NOTICE: if you create a new conda environment, remember to activate it before any other command

      source activate AttGAN
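    • a quick sanity check (a minimal sketch, not part of the repo) to confirm that TensorFlow and the GPU are visible from the new environment:

      import tensorflow as tf

      print(tf.__version__)              # expect 1.15.x
      print(tf.test.is_gpu_available())  # True if CUDA is set up correctly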
  • Data Preparation

    • Option 1: CelebA-unaligned (higher quality than the aligned data, 10.2GB)

      • download the dataset

      • unzip and process the data

        7z x ./data/img_celeba/img_celeba.7z/img_celeba.7z.001 -o./data/img_celeba/
        
        unzip ./data/img_celeba/annotations.zip -d ./data/img_celeba/
        
        python ./scripts/align.py
    • Option 2: CelebA-HQ (we use the data from CelebAMask-HQ, 3.2GB)

      • CelebAMask-HQ.zip (move to ./data/CelebAMask-HQ.zip): Google Drive or Baidu Netdisk

      • unzip and process the data

        unzip ./data/CelebAMask-HQ.zip -d ./data/
        
        python ./scripts/split_CelebA-HQ.py
  • Run AttGAN

    • training (see examples.md for more training commands)

      # for CelebA
      CUDA_VISIBLE_DEVICES=0 \
      python train.py \
      --load_size 143 \
      --crop_size 128 \
      --model model_128 \
      --experiment_name AttGAN_128
      
      # for CelebA-HQ
      CUDA_VISIBLE_DEVICES=0 \
      python train.py \
      --img_dir ./data/CelebAMask-HQ/CelebA-HQ-img \
      --train_label_path ./data/CelebAMask-HQ/train_label.txt \
      --val_label_path ./data/CelebAMask-HQ/val_label.txt \
      --load_size 128 \
      --crop_size 128 \
      --n_epochs 200 \
      --epoch_start_decay 100 \
      --model model_128 \
      --experiment_name AttGAN_128_CelebA-HQ
    • testing

      • single attribute editing (inversion)

        # for CelebA
        CUDA_VISIBLE_DEVICES=0 \
        python test.py \
        --experiment_name AttGAN_128
        
        # for CelebA-HQ
        CUDA_VISIBLE_DEVICES=0 \
        python test.py \
        --img_dir ./data/CelebAMask-HQ/CelebA-HQ-img \
        --test_label_path ./data/CelebAMask-HQ/test_label.txt \
        --experiment_name AttGAN_128_CelebA-HQ
      • multiple attribute editing (inversion) example

        # for CelebA
        CUDA_VISIBLE_DEVICES=0 \
        python test_multi.py \
        --test_att_names Bushy_Eyebrows Pale_Skin \
        --experiment_name AttGAN_128
      • attribute sliding example

        # for CelebA
        CUDA_VISIBLE_DEVICES=0 \
        python test_slide.py \
        --test_att_name Pale_Skin \
        --test_int_min -2 \
        --test_int_max 2 \
        --test_int_step 0.5 \
        --experiment_name AttGAN_128
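
        Conceptually, test_slide.py sweeps the target attribute intensity over a range; with the flags above, the swept values would be as follows (a sketch of the arithmetic, not the repo's code):

        import numpy as np

        # nine editing strengths for Pale_Skin: -2.0, -1.5, ..., 1.5, 2.0
        intensities = np.arange(-2.0, 2.0 + 0.5, 0.5)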
    • loss visualization

      CUDA_VISIBLE_DEVICES='' \
      tensorboard \
      --logdir ./output/AttGAN_128/summaries \
      --port 6006
    • convert trained model to .pb file

      python to_pb.py --experiment_name AttGAN_128
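
      The frozen graph can then be loaded for standalone inference. A minimal TF 1.x sketch (the .pb path and the tensor names below are assumptions; check to_pb.py for the actual ones):

      import tensorflow as tf  # TensorFlow 1.15

      graph_def = tf.compat.v1.GraphDef()
      with tf.io.gfile.GFile('./output/AttGAN_128/generator.pb', 'rb') as f:  # assumed path
          graph_def.ParseFromString(f.read())

      with tf.Graph().as_default() as graph:
          tf.import_graph_def(graph_def, name='')

      sess = tf.compat.v1.Session(graph=graph)
      # the feed/fetch tensor names depend on how to_pb.py exports them, e.g.:
      # xb = sess.run('xb:0', feed_dict={'xa:0': images, 'b_:0': target_atts})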
  • Using Trained Weights

  • Example for Custom Dataset

Citation

If you find AttGAN useful in your research work, please consider citing:

@ARTICLE{8718508,
  author={Z. {He} and W. {Zuo} and M. {Kan} and S. {Shan} and X. {Chen}},
  journal={IEEE Transactions on Image Processing},
  title={AttGAN: Facial Attribute Editing by Only Changing What You Want},
  year={2019},
  volume={28},
  number={11},
  pages={5464-5478},
  doi={10.1109/TIP.2019.2916751},
  ISSN={1057-7149},
  month={Nov},
}
Comments
  • TypeError

    Hello, I have downloaded the trained model and am trying to test it, but I get the following error. Can you please suggest what went wrong?

    I am testing it on Google Colab and using only images 182000 to 182637. TypeError: Input 'filename' of 'ReadFile' Op has type float32 that does not match expected type of string.

    opened by shbnm21 21
  • Unable to use a different number of images

    Hello. I am using the hd-celeba 384 dataset with the provided 384_shortcut1_inject1_none_hd model. I am trying to use a custom number of images instead of all 202599. I tried the following: modify list_attr_celeba.txt to include only the first 20 images, and put those 20 images in ./data/img_crop_celeba/*.jpg. However, this is the error I get:

    TypeError: Input 'filename' of 'ReadFile' Op has type float32 that does not match expected type of string.

    I also tried to train with only 20 images and got the same error. I get no errors when running train/test with all 202599 images.

    opened by githubusername001 10
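
    Both reports above hit the same ReadFile error. A plausible cause (an assumption, not confirmed in these threads) is that trimming list_attr_celeba.txt without updating the image count in its first line makes the filename column parse as numbers, so tf.read_file receives a float32 tensor instead of strings. A hypothetical fix sketch, forcing the filename column to string dtype:

    import numpy as np

    # hypothetical sketch: read the filename column as strings explicitly,
    # so it can never silently become float32
    names = np.genfromtxt('./data/img_celeba/list_attr_celeba.txt',
                          dtype=str, skip_header=2, usecols=0)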
  • Questions about the handling of noise z in DTLCGAN with an encoder attached

    Hi, I am referring to your DTLCGAN code. Following your last reply, I added an encoder to it and have some questions about your code. In your train.py, the z_sample you choose to sample is generated by z_ipt_samples = [np.stack([np.random.normal(size=[z_dim])] * len(c_ipt_sample)) for i in range(15)], which has shape (15, 18, 100).

    So, now that I use an encoder, the noise z (as well as the z_ipt used in training) should be replaced by the encoder's output, right?

    But what does len(c_ipt_sample) mean here? You generate 18 noises for one testing sample? I counted your training samples; the lowest layer of your decision tree does have 18 images (2×3×3 = 18). So why do you generate testing samples from bottom to top, and not the reverse? How can you be certain that these 18 noises all belong to the same person if you generate from bottom to top?

    Besides, should my encoder do the parallel thing, choosing 18 front-codes of 18 images and using them to do the sampling? It seems wrong here, because my 18 front-codes come from 18 different images (that is, 18 different persons), and the resulting sampling tree was weird (some results are OK, and I am confused about them). But if I use the same front-code of one image (the same person) and copy it 18 times, the training samples are all the same, with no change of attributes at all.

    opened by XijieJiao 8
  • Facial Attribute

    Hello @LynnHo, can you tell us how to do facial attribute extraction, i.e., given any face image, how can we get its 40 facial attributes?

    Thanks.

    opened by xyzdcgan 8
  • The same result for all the attributes

    As stated in the title, I obtain a row of identical images without any changes, regardless of the attribute (column). I use a custom dataset organized like CelebA. Could you advise what may cause this?

    opened by acecreamu 8
  • Attribute Classifier for Editing Accuracy/Error

    I'm curious what you used as the attribute classifier to measure attribute editing accuracy and preservation error. Also, do you have any plans to release this trained model? Thanks.
    opened by tegillis 7
  • About the performance of the pretrained model

    The pre-trained model you provided does not perform well on the CelebA-HQ dataset, so I have a question: for how many epochs did you train the pre-trained model, and on which dataset? Another question: my use case is applying glasses to a face, so I need to know whether training a new model from scratch on the CelebA-HQ dataset would help achieve this task. Can we train the model on a single attribute like eyeglasses or a smile? Thanks in advance.

    opened by alan-ai-learner 4
  • Attribute Style Manipulation

    Hi, thank you for sharing this great project. I found your attribute style manipulation particularly meaningful and useful for my recent research. I saw from a previous issue that you have no plan to open-source the code for this part. I have the following questions:

    1. I found nowhere in your paper how you derive θ or the relationship between θ and the image, so how do you get θ in an unsupervised way for each input?
    2. Is this part's idea (and the way you derived θ) based on the paper 'Generative Attribute Controller with Conditional Filtered Generative Adversarial Networks'? (I found their code is also not open source.)
    3. If I want to implement this part myself, could you give me some hints on where to start, or any papers and sources I could refer to? (There is really very little work on accurate or multiple attribute style manipulation.)

    Thank you!

    opened by XijieJiao 4
  • Cannot get a desired result on the CelebA-HQ dataset

    Hi there,

    Your work is interesting. I have a problem; could you help figure it out?

    I applied your method to the CelebA-HQ dataset for single-attribute manipulation, but I cannot get the desired result. At the 59th training epoch, the result (the attribute of interest is "Smiling") shows no change in the third-column images.

    Thanks and Regards,

    opened by EvaFlower 4
  • Hi, I have a question about the training and the test

    First, I appreciate your excellent work and have been interested in it since 2018.

    I have a question about training and testing in your work. To clarify, I consider the case where the attribute values are binary.

    For training, the attribute values seem to be -1 or 1 (read 0 or 1, then *2 - 1 -> [-1, 1]). (https://github.com/LynnHo/AttGAN-Tensorflow/blob/master/train.py#L161)

    On the other hand, the range of attributes is [-2, 2] at test time (read 0 or 1, then *2 - 1 -> [-1, 1], finally *2 -> [-2, 2]). (https://github.com/LynnHo/AttGAN-Tensorflow/blob/master/train.py#L246, test_int = 2.0)

    Is it right that you use different attribute-vector values in training and test?

    I find that I cannot reproduce the attribute classification result without this trick, but I can reproduce it by using [-2, 2].

    Thanks!

    opened by FriedRonaldo 4
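
    For reference, the mapping described in this thread, as a minimal numpy sketch:

    import numpy as np

    raw = np.array([0, 1, 1, 0])     # attribute labels as read from the file
    train_att = raw * 2 - 1          # -> {-1, 1}, used during training
    test_att = (raw * 2 - 1) * 2.0   # -> {-2, 2}, used at test time (test_int = 2.0)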
  • Why attributes are encoded into [-1, 1], not [0, 1]

    @LynnHo Hi, I have been reading your code these days, and I wonder why the attribute labels have to be mapped into [-1, 1] instead of [0, 1]. It seems very important and to have some technical reason, because you marked that code with three exclamation marks in a comment. Could you share some experimental knowledge about this?

    opened by ChengBinJin 4
  • Applying your code to datasets from masked face to non-masked face

    I want to apply this code to CelebA with fake masked images, and I want to remove the mask. How can I apply this concept? Can you guide me on whether I can do it using your code? If yes, where should I change the code? Just train.py and data.py?

    opened by Nuha1412 1
  • Style manipulation not robust, very sensitive to varied parameters

    How do you get a good balance among the various hyper-parameters, like the different loss weights and the learning rate, when style manipulation is adopted? I found the training of the network very unstable.

    I can get style manipulation results on bangs and eyeglasses, but the control is unstable and the sharpness of the images is also affected. The control on eyeglasses affects only the shade; the model has no control over shape and size.

    Apart from hyper-parameters, are there any other places where there can be problems, such as training settings?

    Besides, when implementing style manipulation, in addition to the loss on the generated style controller, do you also use the original attribute loss?

    Looking forward to your answer. Thank you!

    opened by jiaoxijie 2
Releases: v1
Owner
Zhenliang He