Accurate 3D Face Reconstruction with Weakly-Supervised Learning: From Single Image to Image Set (CVPRW 2019). A PyTorch implementation.

Overview

This is an unofficial PyTorch implementation (by one of the paper's authors) of the following paper:

Y. Deng, J. Yang, S. Xu, D. Chen, Y. Jia, and X. Tong, Accurate 3D Face Reconstruction with Weakly-Supervised Learning: From Single Image to Image Set, IEEE Computer Vision and Pattern Recognition Workshop (CVPRW) on Analysis and Modeling of Faces and Gestures (AMFG), 2019. (Best Paper Award!)

The method uses hybrid-level weakly-supervised training for CNN-based 3D face reconstruction. It is fast, accurate, and robust to pose and occlusions, and it achieves state-of-the-art performance on multiple benchmarks such as FaceWarehouse, MICC Florence, and the NoW Challenge.

For the original TensorFlow implementation, check this repo.

This implementation is written by S. Xu.

Performance

● Reconstruction accuracy

The PyTorch implementation achieves lower shape reconstruction error (a 9% improvement) compared to the original TensorFlow implementation. Quantitative evaluation (average shape error in mm) on several benchmarks is as follows:

Method                | FaceWareHouse | MICC Florence | NoW Challenge
Deep3DFace TensorFlow | 1.81±0.50     | 1.67±0.50     | 1.54±1.29
Deep3DFace PyTorch    | 1.64±0.50     | 1.53±0.45     | 1.41±1.21

The comparison result with state-of-the-art public 3D face reconstruction methods on the NoW face benchmark is as follows:

Rank | Method                               | Median (mm) | Mean (mm) | Std (mm)
1    | DECA [Feng et al., SIGGRAPH 2021]    | 1.09        | 1.38      | 1.18
2    | Deep3DFace PyTorch                   | 1.11        | 1.41      | 1.21
3    | RingNet [Sanyal et al., CVPR 2019]   | 1.21        | 1.53      | 1.31
4    | Deep3DFace [Deng et al., CVPRW 2019] | 1.23        | 1.54      | 1.29
5    | 3DDFA-V2 [Guo et al., ECCV 2020]     | 1.23        | 1.57      | 1.39
6    | MGCNet [Shang et al., ECCV 2020]     | 1.31        | 1.87      | 2.63
7    | PRNet [Feng et al., ECCV 2018]       | 1.50        | 1.98      | 1.88
8    | 3DMM-CNN [Tran et al., CVPR 2017]    | 1.84        | 2.33      | 2.05

For more details about the evaluation, check the NoW Challenge website.

● Visual quality

The PyTorch implementation achieves better visual consistency with the input images compared to the original TensorFlow version.

● Speed

The training speed is on par with the original TensorFlow implementation. For more information, see here.

Major changes

● Differentiable renderer

We use Nvdiffrast, a PyTorch library that provides high-performance primitive operations for rasterization-based differentiable rendering. The original TensorFlow implementation used tf_mesh_renderer instead.
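
For orientation, here is a minimal sketch of differentiable rasterization with Nvdiffrast. The triangle geometry and vertex colors are placeholder data, not this repo's face model; only the dr.rasterize/dr.interpolate calls reflect the library's actual primitives.

import torch
import nvdiffrast.torch as dr

# Minimal Nvdiffrast sketch: rasterize one triangle and interpolate per-vertex
# colors. The geometry below is placeholder data, not this repo's face model.
glctx = dr.RasterizeGLContext()  # requires an Nvidia GPU with OpenGL support

pos = torch.tensor([[[-0.8, -0.8, 0.0, 1.0],
                     [ 0.8, -0.8, 0.0, 1.0],
                     [ 0.0,  0.8, 0.0, 1.0]]], device='cuda')       # clip-space positions, [batch, n_verts, 4]
tri = torch.tensor([[0, 1, 2]], dtype=torch.int32, device='cuda')   # triangle indices, [n_tris, 3]
col = torch.tensor([[[1.0, 0.0, 0.0],
                     [0.0, 1.0, 0.0],
                     [0.0, 0.0, 1.0]]], device='cuda')              # per-vertex colors, [batch, n_verts, 3]

rast, _ = dr.rasterize(glctx, pos, tri, resolution=[224, 224])
image, _ = dr.interpolate(col, rast, tri)   # [batch, H, W, 3]; gradients flow back to pos and col
mask = (rast[..., 3:] > 0).float()          # pixels covered by the mesh (background triangle id is 0)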

● Face recognition model

We use Arcface, a state-of-the-art face recognition model, for perceptual loss computation. By contrast, the original TensorFlow implementation used FaceNet.
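
Conceptually, the perceptual loss compares deep recognition features of the input photo and the rendered reconstruction. Below is a hedged sketch under that reading; recog_net stands for a frozen Arcface backbone mapping aligned face crops to embeddings, and is not this repo's exact module interface.

import torch
import torch.nn.functional as F

# Hedged sketch of a recognition-based perceptual loss; `recog_net` is a
# stand-in for a frozen Arcface backbone, not this repo's exact interface.
def perceptual_loss(recog_net, rendered, target):
    # rendered, target: [batch, 3, 112, 112] face crops aligned for the recognizer
    with torch.no_grad():
        feat_t = F.normalize(recog_net(target), dim=-1)    # target identity embedding
    feat_r = F.normalize(recog_net(rendered), dim=-1)      # gradients flow to the reconstruction
    cos_sim = (feat_r * feat_t).sum(dim=-1)                # per-sample cosine similarity
    return (1.0 - cos_sim).mean()                          # zero when identities agree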

● Training configuration

Data augmentation is used during training, including random image shifting, scaling, rotation, and flipping. We also enlarge the training batch size from 5 to 32 to stabilize the training process.
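
For illustration, a minimal augmentation sketch using torchvision is below. The parameter ranges are guesses, not this repo's exact settings, and in the real pipeline the landmarks must receive the same transform to stay aligned with the image.

import random
import torchvision.transforms.functional as TF
from PIL import Image

# Illustrative augmentation sketch (random shift, scale, rotation, flip).
# Ranges are guesses, not this repo's settings.
def augment(img: Image.Image) -> Image.Image:
    w, h = img.size
    angle = random.uniform(-10.0, 10.0)        # rotation in degrees
    scale = random.uniform(0.95, 1.05)         # isotropic scaling
    dx = int(random.uniform(-0.05, 0.05) * w)  # horizontal shift in pixels
    dy = int(random.uniform(-0.05, 0.05) * h)  # vertical shift in pixels
    img = TF.affine(img, angle=angle, translate=[dx, dy], scale=scale, shear=0.0)
    if random.random() < 0.5:
        img = TF.hflip(img)                    # horizontal flip
    return img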

● Training data

We use an additional high-quality face image dataset, FFHQ, to increase the diversity of the training data.

Requirements

This implementation has only been tested on Ubuntu with Nvidia GPUs and CUDA installed.

Installation

  1. Clone the repository and set up a conda environment with all dependencies as follows:
git clone https://github.com/sicxu/Deep3DFaceRecon_pytorch.git --recursive
cd Deep3DFaceRecon_pytorch
conda env create -f environment.yml
source activate deep3d_pytorch
  2. Install the Nvdiffrast library:
cd nvdiffrast    # ./Deep3DFaceRecon_pytorch/nvdiffrast
pip install .
  3. Install Arcface PyTorch:
cd ..    # ./Deep3DFaceRecon_pytorch
git clone https://github.com/deepinsight/insightface.git
cp -r ./insightface/recognition/arcface_torch/ ./models/

Inference with a pre-trained model

Prepare prerequisite models

  1. Our method uses the Basel Face Model 2009 (BFM09) to represent 3D faces. Get access to BFM09 using this link. After gaining access, download "01_MorphableModel.mat". In addition, we use an Expression Basis provided by Guo et al. Download the Expression Basis (Exp_Pca.bin) using this link (Google Drive). Organize all files into the following structure:
Deep3DFaceRecon_pytorch
│
└─── BFM
    │
    └─── 01_MorphableModel.mat
    │
    └─── Exp_Pca.bin
    │
    └─── ...
  2. We provide a model trained on a combination of the CelebA, LFW, 300WLP, IJB-A, LS3D-W, and FFHQ datasets. Download the pre-trained model using this link (Google Drive) and organize the directory into the following structure:
Deep3DFaceRecon_pytorch
│
└─── checkpoints
    │
    └─── <model_name>
        │
        └─── epoch_20.pth

Test with custom images

To reconstruct 3D faces from test images, organize the test image folder as follows:

Deep3DFaceRecon_pytorch
│
└─── <folder_to_test_images>
    │
    └─── *.jpg/*.png
    │
    └─── detections
        │
        └─── *.txt

The *.jpg/*.png files are the test images. Each *.txt file contains the 5 detected facial landmarks of the corresponding image (a 5×2 array) and has the same name as that image. Check ./datasets/examples for a reference.
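
As a quick sanity check on the format, each detections/*.txt file should parse as a 5×2 array of pixel coordinates (a common 5-point order is both eyes, nose tip, and both mouth corners). A minimal sketch, with an illustrative file name:

import numpy as np

# Each detections/*.txt file holds five whitespace-separated "x y" rows.
lm = np.loadtxt('./datasets/examples/detections/000002.txt')  # file name is illustrative
assert lm.shape == (5, 2)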

Then, run the test script:

# get reconstruction results of your custom images
python test.py --name=<model_name> --epoch=20 --img_folder=<folder_to_test_images>

# get reconstruction results of example images
python test.py --name=<model_name> --epoch=20 --img_folder=./datasets/examples

Results will be saved into ./checkpoints/<model_name>/results/<folder_to_test_images>, which contains the following files:

*.png A combination of the cropped input image, the reconstructed image, and a visualization of the projected landmarks.
*.obj Reconstructed 3D face mesh with predicted color (texture + illumination) in the world coordinate space. Best viewed in MeshLab.
*.mat Predicted 257-dimensional coefficients and 68 projected 2D facial landmarks. Best viewed in MATLAB.
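
To work with the saved coefficients programmatically, here is a hedged sketch of loading the .mat file in Python. The 257-dimension split shown (80 identity + 64 expression + 80 texture + 3 pose angles + 27 spherical-harmonics illumination + 3 translation) follows the paper's coefficient layout; the key name 'coeff' and the file path are assumptions, not verified against this repo's exact output.

import numpy as np
from scipy.io import loadmat

# Hedged sketch: the key name 'coeff' and the path are assumptions; the
# 80/64/80/3/27/3 split (= 257) follows the paper's coefficient layout.
data = loadmat('./checkpoints/<model_name>/results/examples/000002.mat')
coeff = np.asarray(data['coeff']).reshape(-1)
id_c   = coeff[:80]        # identity (shape) coefficients
exp_c  = coeff[80:144]     # expression coefficients
tex_c  = coeff[144:224]    # texture (albedo) coefficients
angles = coeff[224:227]    # pose angles
gamma  = coeff[227:254]    # SH illumination (9 coefficients x 3 color channels)
trans  = coeff[254:257]    # translation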

Training a model from scratch

Prepare prerequisite models

  1. We rely on Arcface to extract identity features for loss computation. Download the pre-trained model from Arcface using this link. By default, we use the resnet50 backbone (ms1mv3_arcface_r50_fp16). Organize the downloaded files into the following structure:
Deep3DFaceRecon_pytorch
│
└─── checkpoints
    │
    └─── recog_model
        │
        └─── ms1mv3_arcface_r50_fp16
            │
            └─── backbone.pth
  2. We initialize R-Net using weights trained on ImageNet. Download the weights provided by PyTorch using this link, and organize the file into the following structure:
Deep3DFaceRecon_pytorch
│
└─── checkpoints
    │
    └─── init_model
        │
        └─── resnet50-0676ba61.pth
  3. We provide a landmark detector (TensorFlow model) to extract 68 facial landmarks for loss computation. The detector was trained on the 300WLP, LFW, and LS3D-W datasets. Download the trained model using this link (Google Drive) and organize the file as follows:
Deep3DFaceRecon_pytorch
│
└─── checkpoints
    │
    └─── lm_model
        │
        └─── 68lm_detector.pb

Data preparation

  1. To train a model with custom images, 5 facial landmarks for each image are needed in advance for the image pre-alignment process. We recommend using dlib or MTCNN to detect these landmarks; a hedged MTCNN sketch appears at the end of this subsection. Then, organize all files into the following structure:
Deep3DFaceRecon_pytorch
│
└─── datasets
    │
    └─── <folder_to_training_images>
        │
        └─── *.png/*.jpg
        │
        └─── detections
            │
            └─── *.txt

The *.txt files contain 5 facial landmarks (a 5×2 array per file) and should have the same name as their corresponding images.

  2. Generate 68 landmarks and skin attention masks for the images using the following script:
# preprocess training images
python data_preparation.py --img_folder <folder_to_training_images>

# alternatively, you can preprocess multiple image folders simultaneously
python data_preparation.py --img_folder <folder_to_training_images1> <folder_to_training_images2> <folder_to_training_images3>

# preprocess validation images
python data_preparation.py --img_folder <folder_to_validation_images> --mode=val

The script generates landmark and skin-mask files and saves them into ./datasets/<folder_to_training_images>. It also writes a file listing the paths of all training data into ./datalist, which is then used by the training script.
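
For step 1 above, here is a hedged sketch of generating the detections/*.txt files with MTCNN from facenet-pytorch (one of the recommended detectors); the paths are illustrative, and only one face per image is kept.

import os
import numpy as np
from PIL import Image
from facenet_pytorch import MTCNN   # dlib would work here as well

# Hedged sketch for step 1 of data preparation: detect 5 landmarks per image
# and save them as detections/<name>.txt. Paths are illustrative.
img_folder = './datasets/<folder_to_training_images>'
os.makedirs(os.path.join(img_folder, 'detections'), exist_ok=True)

mtcnn = MTCNN(select_largest=True)  # keep the largest detected face
for name in sorted(os.listdir(img_folder)):
    if not name.lower().endswith(('.png', '.jpg')):
        continue
    img = Image.open(os.path.join(img_folder, name)).convert('RGB')
    _, _, landmarks = mtcnn.detect(img, landmarks=True)  # landmarks: [n_faces, 5, 2] or None
    if landmarks is None:
        continue  # no face detected; skip this image
    out = os.path.join(img_folder, 'detections', os.path.splitext(name)[0] + '.txt')
    np.savetxt(out, landmarks[0])  # five rows of "x y"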

Train the face reconstruction network

Run the following script to train a face reconstruction model using the pre-processed data:

# train with single GPU
python train.py --name=<custom_experiment_name> --gpu_ids=0

# train with multiple GPUs
python train.py --name=<custom_experiment_name> --gpu_ids=0,1

# train with other custom settings
python train.py --name=<custom_experiment_name> --gpu_ids=0 --batch_size=32 --n_epochs=20

Training logs and model parameters will be saved into ./checkpoints/<custom_experiment_name>.

By default, the script uses a batch size of 32 and trains the model for 20 epochs. For reference, the pre-trained model in this repo was trained with the default settings on a collection of 300K images. A single iteration takes 0.8~0.9 s on a single Tesla M40 GPU, so a full run (300,000 / 32 ≈ 9,400 iterations per epoch, times 20 epochs) takes around two days.

To use a trained model, see the Inference section above.

Contact

If you have any questions, please contact the paper authors.

Citation

Please cite the following paper if this model helps your research:

@inproceedings{deng2019accurate,
    title={Accurate 3D Face Reconstruction with Weakly-Supervised Learning: From Single Image to Image Set},
    author={Yu Deng and Jiaolong Yang and Sicheng Xu and Dong Chen and Yunde Jia and Xin Tong},
    booktitle={IEEE Computer Vision and Pattern Recognition Workshops},
    year={2019}
}

The face images on this page are from the public CelebA dataset released by MMLab, CUHK.

Part of the code in this implementation uses CUT as a reference.
