Unofficial PyTorch Implementation for HifiFace (https://arxiv.org/abs/2106.09965)

Overview

HifiFace — Unofficial PyTorch Implementation

Image source: HifiFace: 3D Shape and Semantic Prior Guided High Fidelity Face Swapping (figure 1, pg. 1)


This repository is an unofficial implementation of the face swapping model proposed by Wang et al. in their paper HifiFace: 3D Shape and Semantic Prior Guided High Fidelity Face Swapping. The implementation is built on PyTorch Lightning, a lightweight wrapper for PyTorch.

HifiFace Overview

The task of face swapping transfers the face and identity of a source person onto the head of a target person.

The HifiFace architecture can be broken up into three primary components: the 3D shape-aware identity extractor, the semantic facial fusion module, and an encoder-decoder structure. A high-level overview of the architecture is shown in the image below.

Image source: HifiFace: 3D Shape and Semantic Prior Guided High Fidelity Face Swapping (figure 2, pg. 3)
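To make that structure concrete, here is a minimal PyTorch sketch of how the three components could fit together. Everything in it (class names, feature sizes, and the omission of the full SFF blending) is an illustrative assumption, not the code used in this repository.

import torch
import torch.nn as nn

class ShapeAwareIdentityExtractor(nn.Module):
    # Fuses an ArcFace identity embedding with 3DMM shape coefficients (hypothetical sizes).
    def __init__(self, id_dim=512, shape_dim=257, out_dim=256):
        super().__init__()
        self.fuse = nn.Linear(id_dim + shape_dim, out_dim)

    def forward(self, source_identity, fused_3dmm_coeffs):
        return self.fuse(torch.cat([source_identity, fused_3dmm_coeffs], dim=1))

class Generator(nn.Module):
    # Encoder-decoder conditioned on the shape-aware identity vector; the real model
    # additionally blends low-level target features via the semantic facial fusion module.
    def __init__(self, sid_dim=256):
        super().__init__()
        self.encoder = nn.Sequential(nn.Conv2d(3, 64, 3, 2, 1), nn.ReLU())
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64 + sid_dim, 64, 4, 2, 1), nn.ReLU(),
            nn.Conv2d(64, 3, 3, 1, 1), nn.Tanh())

    def forward(self, target_image, sid_vector):
        feat = self.encoder(target_image)                                   # target appearance features
        sid = sid_vector[:, :, None, None].expand(-1, -1, *feat.shape[2:])  # broadcast identity over space
        return self.decoder(torch.cat([feat, sid], dim=1))                  # identity-conditioned decoding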

Changes from the original paper

Dataset

In the paper, the authors used VGGFace2 and Asian-Celeb as the training datasets. Unfortunately, the Asian-Celeb dataset can only be accessed with a Baidu account, which we do not have, so we use only VGGFace2 as our training dataset.

Model

The paper proposes two versions of the HifiFace model based on the output image size: 256x256 and 512x512 (referred to as Ours-256 and Ours-512 in the paper). The 512x512 model uses an extra data-preprocessing step before training. In this open-source project, we implement the 256x256 model. For the discriminator, the original paper uses the discriminator from StarGAN v2, whereas our implementation uses the multi-scale discriminator from SPADE.
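For reference, below is a minimal sketch of a SPADE/pix2pixHD-style multi-scale discriminator: the same PatchGAN network is applied to the input at several downsampled resolutions. The layer sizes and number of scales here are assumptions for illustration, not this repository's exact settings.

import torch
import torch.nn as nn

class PatchDiscriminator(nn.Module):
    # A small PatchGAN discriminator that outputs patch-wise real/fake logits.
    def __init__(self, in_ch=3, base=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, base, 4, 2, 1), nn.LeakyReLU(0.2),
            nn.Conv2d(base, base * 2, 4, 2, 1), nn.InstanceNorm2d(base * 2), nn.LeakyReLU(0.2),
            nn.Conv2d(base * 2, base * 4, 4, 2, 1), nn.InstanceNorm2d(base * 4), nn.LeakyReLU(0.2),
            nn.Conv2d(base * 4, 1, 4, 1, 1))

    def forward(self, x):
        return self.net(x)

class MultiScaleDiscriminator(nn.Module):
    # Runs independent PatchGAN discriminators on progressively downsampled inputs,
    # as in the SPADE/pix2pixHD multi-scale setup.
    def __init__(self, num_scales=2):
        super().__init__()
        self.discriminators = nn.ModuleList(PatchDiscriminator() for _ in range(num_scales))
        self.downsample = nn.AvgPool2d(3, stride=2, padding=1, count_include_pad=False)

    def forward(self, x):
        outputs = []
        for disc in self.discriminators:
            outputs.append(disc(x))
            x = self.downsample(x)  # halve the resolution for the next scale
        return outputs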

Installation

Build Docker Image

git clone https://github.com/mindslab-ai/hififace 
cd hififace
git clone https://github.com/sicxu/Deep3DFaceRecon_pytorch && git clone https://github.com/NVlabs/nvdiffrast && git clone https://github.com/deepinsight/insightface.git
cp -r insightface/recognition/arcface_torch/ Deep3DFaceRecon_pytorch/models/
cp -r insightface/recognition/arcface_torch/ ./model/
rm -rf insightface
cp -rf 3DMM/* Deep3DFaceRecon_pytorch
mv Deep3DFaceRecon_pytorch model/
rm -rf 3DMM
docker build -t hififace:latent .
rm -rf nvdiffrast

This Dockerfile was inspired by @yuzhou164 in this issue from Deep3DFaceRecon_pytorch.

Pre-Trained Model for Deep3DFace PyTorch

Follow the guidelines in Prepare prerequisite models.

Set it up at ./model/Deep3DFaceRecon_pytorch/

Pre-Trained Models for ArcFace

We used the official ArcFace pre-trained PyTorch implementation. Download the pre-trained checkpoint from OneDrive (IResNet-100 trained on MS1MV3).
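As a quick sanity check, the sketch below loads that checkpoint and extracts a 512-dimensional identity embedding. It assumes the get_model helper from arcface_torch (copied into ./model/ during the Docker build) and uses a placeholder checkpoint path.

import torch
from backbones import get_model  # run from inside the copied arcface_torch directory

CKPT_PATH = "backbone.pth"  # placeholder: the IResNet-100 (MS1MV3) checkpoint you downloaded

arcface = get_model("r100", fp16=False)
arcface.load_state_dict(torch.load(CKPT_PATH, map_location="cpu"))
arcface.eval()

face = torch.randn(1, 3, 112, 112)  # ArcFace expects 112x112 faces normalized to [-1, 1]
with torch.no_grad():
    identity_embedding = arcface(face)  # shape (1, 512)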

Download HifiFace Pre-Trained Model

Google Drive link (trained on VGGFace2 for 300K iterations)

Training

Dataset & Preprocessing

Align & Crop

We aligned the face images with the landmarks extracted by 3DDFA_V2. The preprocessing code will be added.
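Until that code is published, the sketch below illustrates the general align-and-crop idea with a 5-point similarity transform in OpenCV. The reference template (an ArcFace-style layout scaled to 256x256) and the crop size are assumptions and may not match the exact alignment used for our checkpoints.

import cv2
import numpy as np

# Reference positions of left eye, right eye, nose tip, and mouth corners
# on a 112x112 template, scaled up to 256x256 (illustrative values).
REFERENCE_5PTS = np.array([
    [38.2946, 51.6963], [73.5318, 51.5014], [56.0252, 71.7366],
    [41.5493, 92.3655], [70.7299, 92.2041]], dtype=np.float32) * (256.0 / 112.0)

def align_face(image_bgr, landmarks_5pts, size=256):
    # Estimate a similarity transform mapping the detected landmarks onto the template,
    # then warp the image to a fixed-size crop.
    matrix, _ = cv2.estimateAffinePartial2D(
        np.asarray(landmarks_5pts, dtype=np.float32), REFERENCE_5PTS, method=cv2.LMEDS)
    return cv2.warpAffine(image_bgr, matrix, (size, size), borderValue=0)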

Face Segmentation Map

After aligning the face images, you need to generate a face segmentation map for each image. We used the face segmentation model provided by PSFRGAN; you can use their code and pre-trained model.

Dataset Folder Structure

Each face image and the corresponding segmentation map should have the same name and the same relative path from the top-level directory.

face_image_dataset_folder
└───identity1
│   │   image1.png
│   │   image2.png
│   │   ...
│   
└───identity2
│   │   image1.png
│   │   image2.png
│   │   ...
│ 
|   ...

face_segmentation_mask_folder
└───identity1
│   │   image1.png
│   │   image2.png
│   │   ...
│   
└───identity2
│   │   image1.png
│   │   image2.png
│   │   ...
│ 
|   ...
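Given that layout, a dataset class only needs the shared relative path to pair each image with its parsing map. The sketch below is illustrative; the repository's actual dataset implementation may differ.

import os
from PIL import Image
from torch.utils.data import Dataset

class FaceWithParsingDataset(Dataset):
    # Pairs each face image with its segmentation map via the shared relative path.
    def __init__(self, image_root, parsing_root, transform=None):
        self.image_root, self.parsing_root = image_root, parsing_root
        self.relative_paths = [
            os.path.join(identity, name)
            for identity in sorted(os.listdir(image_root))
            for name in sorted(os.listdir(os.path.join(image_root, identity)))
        ]
        self.transform = transform

    def __len__(self):
        return len(self.relative_paths)

    def __getitem__(self, idx):
        rel = self.relative_paths[idx]
        image = Image.open(os.path.join(self.image_root, rel)).convert("RGB")
        parsing = Image.open(os.path.join(self.parsing_root, rel))  # same name and relative path
        if self.transform is not None:
            image = self.transform(image)
        return image, parsing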

Wandb

Wandb (Weights & Biases) is a powerful tool for managing your model training. Please create a wandb account and a wandb project for training HifiFace with our training code.
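With PyTorch Lightning, the wandb entity and project from config/trainer.yaml end up in a WandbLogger roughly as in the sketch below; the entity, project, and trainer arguments shown here are placeholders, not the values our training script uses internally.

import pytorch_lightning as pl
from pytorch_lightning.loggers import WandbLogger

# "my-entity" and "hififace" are placeholders for your own wandb account and project.
wandb_logger = WandbLogger(entity="my-entity", project="hififace", name="hififace-run")
trainer = pl.Trainer(logger=wandb_logger, gpus=1, max_steps=300_000)  # gpus= for Lightning 1.x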

Changing the Configuration

  • config/model.yaml

    • dataset.train.params.image_root: directory path to the training dataset images
    • dataset.train.params.parsing_root: directory path to the training dataset parsing images
    • dataset.validation.params.image_root: directory path to the validation dataset images
    • dataset.validation.params.parsing_root: directory path to the validation dataset parsing images
  • config/trainer.yaml

    • checkpoint.save_dir: directory where the checkpoints will be saved
    • wandb: fill out your wandb entity and project name
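You can edit the YAML files directly in a text editor. Equivalently, the sketch below sets the same keys programmatically with OmegaConf; whether the project itself uses OmegaConf or plain PyYAML is an assumption, and the paths are placeholders.

from omegaconf import OmegaConf

model_cfg = OmegaConf.load("config/model.yaml")
trainer_cfg = OmegaConf.load("config/trainer.yaml")

model_cfg.dataset.train.params.image_root = "/DATA/faces/train"          # placeholder paths
model_cfg.dataset.train.params.parsing_root = "/DATA/parsing/train"
model_cfg.dataset.validation.params.image_root = "/DATA/faces/val"
model_cfg.dataset.validation.params.parsing_root = "/DATA/parsing/val"
trainer_cfg.checkpoint.save_dir = "/workspace/checkpoints"
# trainer_cfg.wandb: fill in your wandb entity and project name here as well

OmegaConf.save(model_cfg, "config/model.yaml")
OmegaConf.save(trainer_cfg, "config/trainer.yaml")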

Run Docker Container

docker run -it --ipc host --gpus all -v /PATH_TO/hififace:/workspace -v /PATH_TO/DATASET/FOLDER:/DATA --name hififace hififace:latent

Run Training Code

python hififace_trainer.py --model_config config/model.yaml --train_config config/trainer.yaml -n hififace

Inference

Single Image

python hififace_inference.py --gpus 0 --model_config config/model.yaml --model_checkpoint_path hififace_opensouce_299999.ckpt --source_image_path asset/inference_sample/01_source.png --target_image_path asset/inference_sample/01_target.png --output_image_path ./01_result.png

All Possible Pairs of Images in a Directory

python hififace_inference.py --gpus 0 --model_config config/model.yaml --model_checkpoint_path hififace_opensouce_299999.ckpt  --input_directory_path asset/inference_sample --output_image_path ./result.png

Interpolation

# interpolates both the identity and the 3D shape.
python hififace_inference.py --gpus 0 --model_config config/model.yaml --model_checkpoint_path hififace_opensouce_299999.ckpt --source_image_path asset/inference_sample/01_source.png --target_image_path asset/inference_sample/01_target.png --output_image_path ./01_result_all.gif  --interpolation_all

# interpolates only the identity.
python hififace_inference.py --gpus 0 --model_config config/model.yaml --model_checkpoint_path hififace_opensouce_299999.ckpt --source_image_path asset/inference_sample/01_source.png --target_image_path asset/inference_sample/01_target.png --output_image_path ./01_result_identity.gif  --interpolation_identity

# interpolates only the 3D shape.
python hififace_inference.py --gpus 0 --model_config config/model.yaml --model_checkpoint_path hififace_opensouce_299999.ckpt --source_image_path asset/inference_sample/01_source.png --target_image_path asset/inference_sample/01_target.png --output_image_path ./01_result_3d.gif  --interpolation_3d

Our Results

The results from our pre-trained model.

GIF interpolation results from Obama to Trump to Biden and back to Obama. The left image interpolates both the identity and the 3D shape, the middle image interpolates only the identity, and the right image interpolates only the 3D shape.

To-Do List

  • Pre-processing Code
  • Colab Notebook

License

BSD 3-Clause License.

Implementation Author

Changho Choi @ MINDs Lab, Inc. ([email protected])

Matthew B. Webster @ MINDs Lab, Inc. ([email protected])

Citations

@article{DBLP:journals/corr/abs-2106-09965,
  author    = {Yuhan Wang and
               Xu Chen and
               Junwei Zhu and
               Wenqing Chu and
               Ying Tai and
               Chengjie Wang and
               Jilin Li and
               Yongjian Wu and
               Feiyue Huang and
               Rongrong Ji},
  title     = {HifiFace: 3D Shape and Semantic Prior Guided High Fidelity Face Swapping},
  journal   = {CoRR},
  volume    = {abs/2106.09965},
  year      = {2021}
}