The source code of the ICCV2021 paper "PIRenderer: Controllable Portrait Image Generation via Semantic Neural Rendering"

Overview

Website | arXiv | Get Started | Video

PIRenderer

The source code of the ICCV2021 paper "PIRenderer: Controllable Portrait Image Generation via Semantic Neural Rendering"

The proposed PIRenderer can synthesize portrait images by intuitively controlling face motions with fully disentangled 3DMM parameters. This model can be applied to tasks such as:

  • Intuitive Portrait Image Editing

    Intuitive Portrait Image Control

    Pose & Expression Alignment

  • Motion Imitation

    Same & Cross-identity Reenactment

  • Audio-Driven Facial Reenactment

    Audio-Driven Reenactment

News

  • 2021.9.20 The PyTorch code is available!

Colab Demo

Coming soon

Get Started

1). Installation

Requirements

  • Python 3
  • PyTorch 1.7.1
  • CUDA 10.2

Conda Installation

# 1. Create a conda virtual environment.
conda create -n PIRenderer python=3.6
conda activate PIRenderer
conda install -c pytorch pytorch=1.7.1 torchvision cudatoolkit=10.2

# 2. Install other dependencies
pip install -r requirements.txt
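
After installation, the following minimal sketch (not part of this repository) can be used to verify that PyTorch and CUDA are set up correctly:

# sanity check: confirm the expected PyTorch version and that CUDA is visible
import torch

print("PyTorch version:", torch.__version__)      # expected: 1.7.1
print("CUDA available:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("GPU:", torch.cuda.get_device_name(0))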

2). Dataset

We train our model on the VoxCeleb dataset. You can download the demo dataset for inference, or prepare the full dataset for training and testing.

Download the demo dataset

The demo dataset contains all 514 test videos. You can download it with the following command:

./scripts/download_demo_dataset.sh

Or you can download the resources from these links:

Google Drive & Baidu Drive (extraction password: "p9ab")

Then unzip and save the files to ./dataset.

Prepare the dataset

  1. The dataset is preprocessed following the method used in First-Order. You can follow the instructions in their repo to download and crop videos for training and testing.

  2. After obtaining the VoxCeleb videos, we extract 3DMM parameters using Deep3DFaceReconstruction.

    The folders are organized as follows:

    ${DATASET_ROOT_FOLDER}
    └───path_to_videos
    		└───train
    				└───xxx.mp4
    				└───xxx.mp4
    				...
    		└───test
    				└───xxx.mp4
    				└───xxx.mp4
    				...
    └───path_to_3dmm_coeff
    		└───train
    				└───xxx.mat
    				└───xxx.mat
    				...
    		└───test
    				└───xxx.mat
    				└───xxx.mat
    				...
    
  3. We save the videos and 3DMM parameters in an lmdb file. Please run the following command to do this:

    python scripts/prepare_vox_lmdb.py \
    --path path_to_videos \
    --coeff_3dmm_path path_to_3dmm_coeff \
    --out path_to_output_dir
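
Before building the lmdb file, it can be useful to check that the extracted 3DMM coefficients load correctly. The sketch below assumes the coefficients are standard MATLAB .mat files produced by Deep3DFaceReconstruction; the file path is a placeholder and the key names depend on that tool's output:

# inspect one extracted coefficient file (the path is a placeholder)
from scipy.io import loadmat

coeff = loadmat("path_to_3dmm_coeff/train/xxx.mat")
for key, value in coeff.items():
    if not key.startswith("__"):   # skip MATLAB metadata entries
        print(key, getattr(value, "shape", type(value)))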

3). Training and Inference

Inference

The trained weights can be downloaded by running the following command:

./scripts/download_weights.sh

Or you can download the resources from these links: coming soon. Then save the files to ./result/face.
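
To check that the downloaded weights are intact, one option is to try loading them with PyTorch before running inference. This is only a sketch: the checkpoint file names and extensions under ./result/face are not documented here, so the glob pattern is an assumption:

# attempt to load any PyTorch checkpoints found under ./result/face
import glob
import torch

for path in glob.glob("./result/face/*.pt*"):   # extension pattern is an assumption
    checkpoint = torch.load(path, map_location="cpu")
    print(path, "loaded:", type(checkpoint))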

Reenactment

Run the demo for face reenactment:

python -m torch.distributed.launch --nproc_per_node=1 --master_port 12345 inference.py \
--config ./config/face.yaml \
--name face \
--no_resume \
--output_dir ./vox_result/face_reenactment

The output results are saved to ./vox_result/face_reenactment.

Intuitive Control

Coming soon

Train

Our model can be trained with the following command:

python -m torch.distributed.launch --nproc_per_node=4 --master_port 12345 train.py \
--config ./config/face.yaml \
--name face

Citation

If you find this code helpful, please cite our paper:

@misc{ren2021pirenderer,
      title={PIRenderer: Controllable Portrait Image Generation via Semantic Neural Rendering}, 
      author={Yurui Ren and Ge Li and Yuanqi Chen and Thomas H. Li and Shan Liu},
      year={2021},
      eprint={2109.08379},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}

Acknowledgement

We build our project based on imaginaire. Some dataset preprocessing methods are derived from video-preprocessing.
