Occlusion robust 3D face reconstruction model in CFR-GAN (WACV 2022)

Overview

Occlusion Robust 3D face Reconstruction

Yeong-Joon Ju, Gun-Hee Lee, Jung-Ho Hong, and Seong-Whan Lee

Code for the occlusion-robust 3D face reconstruction model in "Complete Face Recovery GAN: Unsupervised Joint Face Rotation and De-Occlusion from a Single-View Image" (WACV 2022).

We propose a novel two-stage fine-tuning strategy for occlusion-robust 3D face reconstruction. Training is split into two stages because initial training on extreme occlusions is difficult. In the first stage we fine-tune the baseline with our newly created datasets; in the second stage we fine-tune it with a teacher-student learning method.

Our baseline is Accurate 3D Face Reconstruction with Weakly-Supervised Learning: From Single Image to Image Set, and our implementation also refers to its code. Note that we focus on alignment and color to guide CFR-GAN on occluded facial images.
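As a rough illustration of the second-stage idea, the sketch below shows a simple teacher-student consistency loss: a frozen teacher regresses coefficients from the clean image, and the student is pushed to produce the same coefficients from the occluded view. The `student`/`teacher` callables and the plain L2 loss are illustrative assumptions, not the repository's exact implementation.

```python
import torch
import torch.nn.functional as F

def teacher_student_loss(student, teacher, occluded_img, original_img):
    """Consistency loss between coefficients regressed from the occluded
    image (student) and from the clean image (frozen teacher).

    `student` and `teacher` are assumed to map a batch of images to a flat
    vector of 3DMM coefficients; this interface is an assumption for the
    sketch, not the exact training code.
    """
    with torch.no_grad():
        target_coeff = teacher(original_img)   # teacher sees the clean face
    pred_coeff = student(occluded_img)         # student sees the occluded face
    # L2 consistency on the regressed coefficient vectors
    return F.mse_loss(pred_coeff, target_coeff)
```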

Requirements


Usage


Preprocessing:

Prepare your own dataset for data augmentation. The datasets used in this paper can be downloaded from the following sources:

Unless a dataset already provides facial landmark labels, you need to predict the facial landmarks yourself; we recommend 3DDFA v2. To reduce error propagation from the facial alignment networks, prepend a flag to the filename (e.g., "pred" + [filename]).

Training an occlusion-robust 3D face model requires datasets of occluded face images, which are not available, so we create them by synthesizing hand-shaped masks onto face images.

python create_train_stage1.py --img_path [your image folder] --lmk_path [your landmarks folder] --save_path [path to save]
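As a conceptual sketch of the kind of augmentation create_train_stage1.py performs, the snippet below pastes a hand-shaped RGBA cutout at a random position and scale over a face image. The actual script also uses the landmarks to place occlusions and writes the paired original/occluded images, so the paths, scale range, and placement logic here are assumptions for illustration only.

```python
import random
from PIL import Image

def synthesize_occlusion(face_path, hand_mask_path, out_path):
    """Paste a hand-shaped RGBA cutout onto a face image at a random
    location and scale. Conceptual sketch only; the real script also
    uses facial landmarks to choose plausible occlusion placements."""
    face = Image.open(face_path).convert("RGB")
    hand = Image.open(hand_mask_path).convert("RGBA")

    # Random scale between 30% and 60% of the face width (assumed range).
    scale = random.uniform(0.3, 0.6) * face.width / hand.width
    hand = hand.resize((int(hand.width * scale), int(hand.height * scale)))

    # Random top-left position that keeps the occluder inside the image.
    x = random.randint(0, max(0, face.width - hand.width))
    y = random.randint(0, max(0, face.height - hand.height))

    occluded = face.copy()
    occluded.paste(hand, (x, y), mask=hand)  # alpha channel as paste mask
    occluded.save(out_path)
```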

For the first training stage, prepare the folders occluded (augmented images), ori_img (original images), and landmarks (3D landmarks), or modify the folder names in train_stage1.py.

**You must align images with align.py**

The meta file format is:

[filename] left eye x left eye y right eye x right eye y nose x nose y left mouth x left mouth y ...

You can use MTCNN or RetinaFace to detect these five points.
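For example, a minimal sketch using facenet-pytorch's MTCNN to write a meta file in the format above is shown below. The five points are returned in the order left eye, right eye, nose, left mouth corner, right mouth corner; the image folder and output filename are assumptions.

```python
from pathlib import Path
from PIL import Image
from facenet_pytorch import MTCNN  # pip install facenet-pytorch

detector = MTCNN(keep_all=False)   # keep only the most confident face

with open("meta.txt", "w") as meta:
    for img_path in sorted(Path("images").glob("*.jpg")):  # assumed folder
        img = Image.open(img_path).convert("RGB")
        boxes, probs, landmarks = detector.detect(img, landmarks=True)
        if landmarks is None:
            continue  # no face found; skip this image
        # landmarks[0]: five (x, y) points: left eye, right eye, nose,
        # left mouth corner, right mouth corner
        pts = " ".join(f"{x:.2f} {y:.2f}" for x, y in landmarks[0])
        meta.write(f"{img_path.name} {pts}\n")
```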

First Fine-tuning Stage:

Instead of a skin mask, we use BiSeNet, a face parsing network. The code and weights were modified and re-trained from this code.
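For reference, a face-region mask can be derived from BiSeNet's per-pixel parsing map roughly as sketched below. The label indices kept here follow the common 19-class CelebAMask-HQ convention used by the face-parsing code this repository adapts; they are an assumption, not necessarily the exact class set used in training.

```python
import numpy as np

# `parsing_map` is assumed to be the argmax over BiSeNet's 19-class output,
# i.e. an (H, W) array of integer labels per pixel.
def face_mask_from_parsing(parsing_map: np.ndarray) -> np.ndarray:
    """Binary face-region mask from a parsing map. The kept indices
    (skin, brows, eyes, nose, lips, mouth) follow the common 19-class
    CelebAMask-HQ convention and are an assumption here."""
    face_classes = [1, 2, 3, 4, 5, 10, 11, 12, 13]
    return np.isin(parsing_map, face_classes).astype(np.uint8)
```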

Train the occlusion-robust 3D face model:

python train_stage1.py

To view the training logs:

tensorboard --logdir=logs_stage1 --bind_all --reload_multifile True

Second Fine-tuning Stage:

  • You can download the MaskedFaceNet dataset here.
  • You can download the FFHQ dataset here.

Train:

python train_stage2.py

To view the training logs:

tensorboard --logdir=logs_stage2 --bind_all --reload_multifile True

Evaluation

python evaluation/benchmark_nme_aflw_2000.py

If you would like to evaluate your own results, please refer to evaluation/estimate_aflw2000.py.
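The benchmark script above reports a normalized mean error (NME) on AFLW2000-3D-style 68-point landmarks. A minimal sketch of that metric, normalized by the ground-truth bounding-box size sqrt(w * h) (the usual AFLW2000-3D convention, assumed here), is:

```python
import numpy as np

def nme(pred: np.ndarray, gt: np.ndarray) -> float:
    """Normalized mean error between predicted and ground-truth 2D
    landmarks, both of shape (68, 2). Normalization by the ground-truth
    bounding-box size sqrt(w * h) is assumed here."""
    w = gt[:, 0].max() - gt[:, 0].min()
    h = gt[:, 1].max() - gt[:, 1].min()
    norm = np.sqrt(w * h)
    per_point = np.linalg.norm(pred - gt, axis=1)  # Euclidean error per point
    return per_point.mean() / norm
```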
