Accurate 3D Face Reconstruction with Weakly-Supervised Learning: From Single Image to Image Set (CVPRW 2019). A PyTorch implementation.

Overview

This is an unofficial official PyTorch implementation of the following paper:

Y. Deng, J. Yang, S. Xu, D. Chen, Y. Jia, and X. Tong, Accurate 3D Face Reconstruction with Weakly-Supervised Learning: From Single Image to Image Set, IEEE Computer Vision and Pattern Recognition Workshop (CVPRW) on Analysis and Modeling of Faces and Gestures (AMFG), 2019. (Best Paper Award!)

The method enforces hybrid-level weakly-supervised training for CNN-based 3D face reconstruction. It is fast, accurate, and robust to pose and occlusions. It achieves state-of-the-art performance on multiple datasets such as FaceWarehouse, MICC Florence, and the NoW Challenge.

For the original TensorFlow implementation, check this repo.

This implementation is written by S. Xu.

Performance

● Reconstruction accuracy

The PyTorch implementation achieves a lower shape reconstruction error (a 9% improvement) compared with the original TensorFlow implementation. Quantitative evaluation (average shape error in mm) on several benchmarks is as follows:

Method                 FaceWarehouse   MICC Florence   NoW Challenge
Deep3DFace TensorFlow  1.81±0.50       1.67±0.50       1.54±1.29
Deep3DFace PyTorch     1.64±0.50       1.53±0.45       1.41±1.21

The comparison result with state-of-the-art public 3D face reconstruction methods on the NoW face benchmark is as follows:

Rank  Method                                 Median (mm)  Mean (mm)  Std (mm)
1.    DECA [Feng et al., SIGGRAPH 2021]      1.09         1.38       1.18
2.    Deep3DFace PyTorch                     1.11         1.41       1.21
3.    RingNet [Sanyal et al., CVPR 2019]     1.21         1.53       1.31
4.    Deep3DFace [Deng et al., CVPRW 2019]   1.23         1.54       1.29
5.    3DDFA-V2 [Guo et al., ECCV 2020]       1.23         1.57       1.39
6.    MGCNet [Shang et al., ECCV 2020]       1.31         1.87       2.63
7.    PRNet [Feng et al., ECCV 2018]         1.50         1.98       1.88
8.    3DMM-CNN [Tran et al., CVPR 2017]      1.84         2.33       2.05

For more details about the evaluation, check the NoW Challenge website.

● Visual quality

The PyTorch implementation achieves better visual consistency with the input images compared with the original TensorFlow version.

● Speed

The training speed is on par with the original TensorFlow implementation. For more information, see here.

Major changes

● Differentiable renderer

We use Nvdiffrast, a PyTorch library that provides high-performance primitive operations for rasterization-based differentiable rendering. The original TensorFlow implementation used tf_mesh_renderer instead.
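
As a rough illustration only (not the renderer code used in this repo), a differentiable rendering pass with Nvdiffrast looks roughly like the sketch below; the mesh, camera, and image size here are placeholder assumptions:

# Minimal Nvdiffrast sketch: rasterize a mesh and interpolate per-vertex colors.
# All inputs below are random placeholders, not this repo's actual face mesh.
import torch
import nvdiffrast.torch as dr

glctx = dr.RasterizeGLContext()  # OpenGL rasterization context (needs a GPU)

pos = torch.rand(1, 100, 4, device='cuda')                              # clip-space vertices [B, V, 4]
tri = torch.randint(0, 100, (50, 3), dtype=torch.int32, device='cuda')  # triangle indices [T, 3]
col = torch.rand(1, 100, 3, device='cuda')                              # per-vertex colors [B, V, 3]

rast, _ = dr.rasterize(glctx, pos, tri, resolution=[224, 224])  # coverage + barycentrics
color, _ = dr.interpolate(col, rast, tri)                       # interpolate colors per pixel
color = dr.antialias(color, rast, pos, tri)                     # differentiable antialiasing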

● Face recognition model

We use Arcface, a state-of-the-art face recognition model, for perceptual loss computation. By contrast, the original TensorFlow implementation used FaceNet.
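
Conceptually, the perceptual loss compares identity embeddings of the input photo and the rendered face. A minimal sketch, assuming a frozen ArcFace backbone that maps a face crop to a 512-dimensional embedding (the loading code and exact loss weighting used in this repo are not shown):

# Cosine-distance identity loss between ArcFace embeddings (illustrative sketch).
import torch
import torch.nn.functional as F

def perceptual_id_loss(backbone, real_img, rendered_img):
    # `backbone` is assumed to be a frozen ArcFace network in eval mode
    # (requires_grad=False on its parameters).
    with torch.no_grad():
        feat_real = backbone(real_img)      # [B, 512] embedding of the input photo
    feat_fake = backbone(rendered_img)      # gradients flow back through the renderer
    cos = F.cosine_similarity(feat_real, feat_fake, dim=-1)
    return (1.0 - cos).mean()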

● Training configuration

Data augmentation is used in the training process, including random image shifting, scaling, rotation, and flipping. We also enlarge the training batch size from 5 to 32 to stabilize the training process.
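
For illustration, the augmentations listed above could be expressed with torchvision as in the sketch below; the exact ranges used in this repo are assumptions, and in the actual pipeline the landmark annotations must be transformed consistently with the image:

# Random shift/scale/rotation plus horizontal flip (illustrative ranges only).
import torchvision.transforms as T

augment = T.Compose([
    T.RandomAffine(degrees=10, translate=(0.05, 0.05), scale=(0.95, 1.05)),
    T.RandomHorizontalFlip(p=0.5),
])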

● Training data

We use an extra high-quality face image dataset, FFHQ, to increase the diversity of the training data.

Requirements

This implementation has only been tested in an Ubuntu environment with Nvidia GPUs and CUDA installed.

Installation

  1. Clone the repository and set up a conda environment with all dependencies as follows:
git clone https://github.com/sicxu/Deep3DFaceRecon_pytorch.git --recursive
cd Deep3DFaceRecon_pytorch
conda env create -f environment.yml
source activate deep3d_pytorch
  2. Install the Nvdiffrast library:
cd nvdiffrast    # ./Deep3DFaceRecon_pytorch/nvdiffrast
pip install .
  3. Install Arcface Pytorch:
cd ..    # ./Deep3DFaceRecon_pytorch
git clone https://github.com/deepinsight/insightface.git
cp -r ./insightface/recognition/arcface_torch/ ./models/

Inference with a pre-trained model

Prepare prerequisite models

  1. Our method uses the Basel Face Model 2009 (BFM09) to represent 3D faces. Get access to BFM09 using this link. After getting access, download "01_MorphableModel.mat". In addition, we use an Expression Basis provided by Guo et al. Download the Expression Basis (Exp_Pca.bin) using this link (google drive). Organize all files into the following structure:
Deep3DFaceRecon_pytorch
│
└─── BFM
    │
    └─── 01_MorphableModel.mat
    │
    └─── Exp_Pca.bin
    |
    └─── ...
  2. We provide a model trained on a combination of the CelebA, LFW, 300WLP, IJB-A, LS3D-W, and FFHQ datasets. Download the pre-trained model using this link (google drive) and organize the directory into the following structure:
Deep3DFaceRecon_pytorch
│
└─── checkpoints
    │
    └─── <model_name>
        │
        └─── epoch_20.pth

Test with custom images

To reconstruct 3D faces from test images, organize the test image folder as follows:

Deep3DFaceRecon_pytorch
│
└─── <folder_to_test_images>
    │
    └─── *.jpg/*.png
    |
    └─── detections
        |
	└─── *.txt

The *.jpg/*.png files are the test images. The *.txt files contain the 5 detected facial landmarks (a 5x2 array) and have the same names as their corresponding images. Check ./datasets/examples for a reference.
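
For reference, a detection file can be written with NumPy as sketched below; the coordinates and landmark ordering (left eye, right eye, nose tip, mouth corners, as commonly produced by MTCNN) are placeholders, so check ./datasets/examples for the exact convention:

# Write a 5x2 landmark file next to my_image.jpg (placeholder coordinates).
import numpy as np

landmarks = np.array([
    [120.3,  95.1],   # left eye
    [180.7,  96.4],   # right eye
    [150.2, 130.8],   # nose tip
    [125.6, 165.0],   # left mouth corner
    [176.9, 166.2],   # right mouth corner
])
np.savetxt('detections/my_image.txt', landmarks)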

Then, run the test script:

# get reconstruction results of your custom images
python test.py --name=<model_name> --epoch=20 --img_folder=<folder_to_test_images>

# get reconstruction results of example images
python test.py --name=<model_name> --epoch=20 --img_folder=./datasets/examples

Results will be saved into ./checkpoints/<model_name>/results/<folder_to_test_images>, which contains the following files:

*.png A combination of the cropped input image, the reconstructed image, and a visualization of the projected landmarks.
*.obj Reconstructed 3D face mesh with predicted color (texture + illumination) in world coordinate space. Best viewed in MeshLab.
*.mat Predicted 257-dimensional coefficients and 68 projected 2D facial landmarks. Best viewed in MATLAB.
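
The *.mat files can be inspected with SciPy as sketched below; the key names stored by the test script are not listed here, so print them to see what is actually saved:

# Load a result file and list its keys (placeholder path, as in the text above).
from scipy.io import loadmat

mat = loadmat('checkpoints/<model_name>/results/<folder_to_test_images>/my_image.mat')
print(mat.keys())   # predicted coefficients and projected 2D landmarks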

Training a model from scratch

Prepare prerequisite models

  1. We rely on Arcface to extract identity features for loss computation. Download the pre-trained model from Arcface using this link. By default, we use the ResNet-50 backbone (ms1mv3_arcface_r50_fp16); organize the downloaded files into the following structure:
Deep3DFaceRecon_pytorch
│
└─── checkpoints
    │
    └─── recog_model
        │
        └─── ms1mv3_arcface_r50_fp16
	    |
	    └─── backbone.pth
  2. We initialize R-Net with weights trained on ImageNet. Download the weights provided by PyTorch using this link, and organize the file into the following structure:
Deep3DFaceRecon_pytorch
│
└─── checkpoints
    │
    └─── init_model
        │
        └─── resnet50-0676ba61.pth
  3. We provide a landmark detector (a TensorFlow model) to extract 68 facial landmarks for loss computation. The detector is trained on the 300WLP, LFW, and LS3D-W datasets. Download the trained model using this link (google drive) and organize the file as follows:
Deep3DFaceRecon_pytorch
│
└─── checkpoints
    │
    └─── lm_model
        │
        └─── 68lm_detector.pb

Data preparation

  1. To train a model with custom images, 5 facial landmarks of each image are needed in advance for an image pre-alignment process. We recommend using dlib or MTCNN to detect these landmarks (a detection sketch follows the file layout below). Then, organize all files into the following structure:
Deep3DFaceRecon_pytorch
│
└─── datasets
    │
    └─── <folder_to_training_images>
        │
        └─── *.png/*.jpg
	|
	└─── detections
            |
	    └─── *.txt

The *.txt files contain 5 facial landmarks with a shape of 5x2 and should have the same names as their corresponding images.
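
As mentioned above, one way to produce the detections/*.txt files is with MTCNN from the facenet-pytorch package; the sketch below is illustrative (the folder name and detector settings are placeholder assumptions), and dlib works just as well:

# Detect 5 facial landmarks per image and save them as detections/<name>.txt.
import os
import numpy as np
from PIL import Image
from facenet_pytorch import MTCNN

img_folder = 'datasets/<folder_to_training_images>'   # placeholder path
os.makedirs(os.path.join(img_folder, 'detections'), exist_ok=True)

mtcnn = MTCNN(select_largest=True)
for name in sorted(os.listdir(img_folder)):
    if not name.lower().endswith(('.jpg', '.png')):
        continue
    img = Image.open(os.path.join(img_folder, name)).convert('RGB')
    _, _, landmarks = mtcnn.detect(img, landmarks=True)   # landmarks: [n_faces, 5, 2]
    if landmarks is None:
        continue   # no face found; skip this image
    txt_name = os.path.splitext(name)[0] + '.txt'
    np.savetxt(os.path.join(img_folder, 'detections', txt_name), landmarks[0])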

  2. Generate 68 landmarks and skin attention masks for the images using the following script:
# preprocess training images
python data_preparation.py --img_folder <folder_to_training_images>

# alternatively, you can preprocess multiple image folders simultaneously
python data_preparation.py --img_folder <folder_to_training_images1> <folder_to_training_images2> <folder_to_training_images3>

# preprocess validation images
python data_preparation.py --img_folder <folder_to_validation_images> --mode=val

The script generates landmark and skin-mask files and saves them into ./datasets/<folder_to_training_images>. It also writes a file listing the paths of all training data into ./datalist, which is then used by the training script.

Train the face reconstruction network

Run the following script to train a face reconstruction model using the pre-processed data:

# train with single GPU
python train.py --name=<custom_experiment_name> --gpu_ids=0

# train with multiple GPUs
python train.py --name=<custom_experiment_name> --gpu_ids=0,1

# train with other custom settings
python train.py --name=<custom_experiment_name> --gpu_ids=0 --batch_size=32 --n_epochs=20

Training logs and model parameters will be saved into ./checkpoints/<custom_experiment_name>.

By default, the script uses a batch size of 32 and trains the model for 20 epochs. For reference, the pre-trained model in this repo was trained with the default settings on an image collection of 300k images. A single iteration takes 0.8~0.9 s on a single Tesla M40 GPU, and the total training process takes around two days.
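
As a quick sanity check of that estimate:

# 300k images, batch size 32, 20 epochs, ~0.85 s per iteration.
iters_per_epoch = 300_000 // 32            # ~9,375 iterations
total_iters = iters_per_epoch * 20         # ~187,500 iterations
total_days = total_iters * 0.85 / 86400    # ~1.8 days
print(round(total_days, 1))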

To use a trained model, see the Inference section above.

Contact

If you have any questions, please contact the paper authors.

Citation

Please cite the following paper if this model helps your research:

@inproceedings{deng2019accurate,
    title={Accurate 3D Face Reconstruction with Weakly-Supervised Learning: From Single Image to Image Set},
    author={Yu Deng and Jiaolong Yang and Sicheng Xu and Dong Chen and Yunde Jia and Xin Tong},
    booktitle={IEEE Computer Vision and Pattern Recognition Workshops},
    year={2019}
}

The face images on this page are from the public CelebA dataset released by MMLab, CUHK.

Part of the code in this implementation takes CUT as a reference.
