Official code release for "Learned Spatial Representations for Few-shot Talking-Head Synthesis" ICCV 2021

Overview

LSR: Learned Spatial Representations for Few-shot Talking-Head Synthesis

Official code release for LSR. For technical details, please refer to:

Learned Spatial Representations for Few-shot Talking-Head Synthesis.
Moustafa Meshry, Saksham Suri, Larry S. Davis, Abhinav Shrivastava
In International Conference on Computer Vision (ICCV), 2021.

Paper | Project page | Video

If you find this code useful, please consider citing:

@inproceedings{meshry2021step,
  title = {Learned Spatial Representations for Few-shot Talking-Head Synthesis},
  author = {Meshry, Moustafa and
          Suri, Saksham and
          Davis, Larry S. and
          Shrivastava, Abhinav},
  booktitle = {Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV)},
  year = {2021}
}

Environment setup

The code was built using TensorFlow 2.2.0, CUDA 10.1.243, and cuDNN v7.6.5, but should be compatible with more recent TensorFlow and CUDA releases. To set up a virtual environment for the code, follow the instructions below.

  • Create a new conda environment
conda create -n lsr python=3.6
  • Activate the lsr environment
conda activate lsr
  • Set up the prerequisites
pip install -r requirements.txt
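
Optionally, you can sanity-check the environment from a Python shell (a minimal check, not part of the official setup):

# Quick sanity check for the new environment (optional).
import tensorflow as tf

print("TensorFlow version:", tf.__version__)        # e.g. 2.2.0
print("GPUs visible:", tf.config.list_physical_devices("GPU"))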

Run a pre-trained model

  • Download our pre-trained model and extract it to ./_trained_models/meta_learning
  • To run the inference for a test identity, execute the following command:
python main.py \
    --train_dir=_trained_models/meta_learning \
    --run_mode=infer \
    --K=1 \
    --source_subject_dir=_datasets/sample_fsth_eval_subset_processed/train/id00017/OLguY5ofUrY/combined \
    --driver_subject_dir=_datasets/sample_fsth_eval_subset_processed/test/id00017/OLguY5ofUrY/combined \
    --few_shot_finetuning=false 

where --K specifies the number of few-shot inputs, --few_shot_finetuning specifies whether or not to fine-tune the meta-learned model on the K-shot inputs, and --source_subject_dir and --driver_subject_dir specify the source identity and driver sequence data respectively. Each output image contains a tuple of 5 images representing the following (concatenated along the width):

  • The input facial landmarks for the target view.
  • The output discrete layout of our model, visualized in RGB.
  • The oracle segmentation map from an off-the-shelf segmentation model (i.e. the pseudo ground truth), visualized in RGB.
  • The final output of our model.
  • The ground truth image of the driver subject.

A sample tuple is shown below.

[Sample tuple, left to right: input landmarks | output spatial map | oracle segmentation | output | ground truth]
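
Since the five views are concatenated along the width, individual views can be recovered with a few lines of Python; the file name below is a hypothetical example.

# Split an output strip into its five equally wide views.
# "output_000.png" is a hypothetical file name; point it at an actual output image.
from PIL import Image

strip = Image.open("output_000.png")
w, h = strip.size
tile_w = w // 5
names = ["landmarks", "spatial_map", "oracle_segmentation", "output", "ground_truth"]
for i, name in enumerate(names):
    strip.crop((i * tile_w, 0, (i + 1) * tile_w, h)).save(f"{name}_000.png")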


Test data and pre-computed outputs

Our model is trained on the train split of the VoxCeleb2 dataset. The data used for evaluation is adopted from the "Few-Shot Adversarial Learning of Realistic Neural Talking Head Models" paper (Zakharov et al., 2019), and can be downloaded from the link provided by its authors.

The test data contains 1600 images of 50 test identities (not seen by the model during training). Each identity has 32 input frames + 32 hold-out frames. The K-shot inputs to the model are uniformly sampled from the 32-frame input set. If subject fine-tuning is enabled, the model is fine-tuned on the K-shot inputs; the 32 hold-out frames are never shown to the fine-tuned model. For more details about the test data, refer to the aforementioned paper (and our paper).

To facilitate comparison to our method, we provide a link with our pre-computed outputs on the test subset for K={1, 4, 8, 32}, for both the subject-agnostic (meta-learned) and subject-fine-tuned models. For more details, please refer to the README file associated with the released outputs. Alternatively, you can run our pre-trained model on your own data, or re-train our model by following the instructions for training, inference and dataset preparation.
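
For illustration, one way to uniformly sample K-shot frame indices from the 32 input frames (an assumption about the sampling scheme, not necessarily the exact indices used in our evaluation):

# One possible uniform sampling of K-shot frame indices from the 32 inputs.
# Illustrative assumption only; the exact scheme may differ from the paper's.
import numpy as np

def uniform_kshot_indices(num_inputs=32, k=4):
    return np.linspace(0, num_inputs - 1, num=k).round().astype(int)

for k in (1, 4, 8, 32):
    print(k, uniform_kshot_indices(32, k))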

Dataset pre-processing

The dataset preprocessing has the following steps:

  1. Facial landmark generation
  2. Face parsing
  3. Converting the VoxCeleb2 dataset to tfrecords (for training).

We provide details for each of these steps.

Facial Landmark Generation

The landmark generation script (preprocessing/landmarks/release_landmark.py) takes the following arguments:

  1. data_dir: Path to a directory containing the data to be processed.
  2. output_dir: Path to the output directory where the processed data should be saved.
  3. k: Sampling rate for frames extracted from videos (default: 10).
  4. mode: Set to videos or images depending on whether the input data consists of video files or already extracted frames.

Here are example commands that process the sample data provided with this repository:

Note: Make sure the folders only contain the videos or images that are to be processed.

  • Generate facial landmarks for sample VoxCeleb2 test videos.
python preprocessing/landmarks/release_landmark.py \
    --data_dir=_datasets/sample_test_videos \
    --output_dir=_datasets/sample_test_videos_processed \
    --mode=videos

To process the full dev and test subsets of the VoxCeleb2 dataset, run the above command twice while setting the --data_dir to point to the downloaded dev and test splits respectively.
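
For example, both splits can be processed in one go with a small driver script; the dataset paths below are hypothetical placeholders for wherever you downloaded VoxCeleb2.

# Process both VoxCeleb2 splits with the landmark script.
# The _datasets/voxceleb2/... paths are hypothetical; adjust to your download location.
import subprocess

for split in ("dev", "test"):
    subprocess.run([
        "python", "preprocessing/landmarks/release_landmark.py",
        f"--data_dir=_datasets/voxceleb2/{split}",
        f"--output_dir=_datasets/voxceleb2_processed/{split}",
        "--mode=videos",
    ], check=True)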

  • Generate facial landmarks for the train portion of the sample evaluation subset.
python preprocessing/landmarks/release_landmark.py \
    --data_dir=_datasets/sample_fsth_eval_subset/train \
    --output_dir=_datasets/sample_fsth_eval_subset_processed/train \
    --mode=images
  • Generate facial landmarks for the test portion of the sample evaluation subset.
python preprocessing/landmarks/release_landmark.py \
    --data_dir=_datasets/sample_fsth_eval_subset/test \
    --output_dir=_datasets/sample_fsth_eval_subset_processed/test \
    --mode=images

To process the full evaluation subset, download it and run the above commands on its train and test portions.

Facial Parsing

The facial parsing step generates the oracle segmentation maps. It uses the face parser from the CelebAMask-HQ GitHub repository.

To set it up, follow the instructions below, and refer to the CelebAMask-HQ GitHub repository for further guidance.

mkdir third_party
git clone https://github.com/switchablenorms/CelebAMask-HQ.git third_party
cp preprocessing/segmentation/* third_party/face_parsing/.

To process the sample data provided with this repository, run the following commands.

  • Generate oracle segmentations for sample VoxCeleb2 videos.
python -u third_party/face_parsing/generate_oracle_segmentations.py \
    --batch_size=1 \
    --test_image_path=_datasets/sample_test_videos_processed
  • Generate oracle segmentations for the train portion of the sample evaluation subset.
python -u third_party/face_parsing/generate_oracle_segmentations.py \
    --batch_size=1 \
    --test_image_path=_datasets/sample_fsth_eval_subset_processed/train
  • Generate oracle segmentations for the test portion of the sample evaluation subset.
python -u third_party/face_parsing/generate_oracle_segmentations.py \
    --batch_size=1 \
    --test_image_path=_datasets/sample_fsth_eval_subset_processed/test

Converting VoxCeleb2 to tfrecords

To re-train our model, you'll need to export the VoxCeleb2 dataset to the tfrecord format. After downloading the VoxCeleb2 dataset and generating the facial landmarks and segmentations for it, run the following command to export them to tfrecords.

python data/export_voxceleb_to_tfrecords.py \
  --dataset_parent_dir=<path to the processed VoxCeleb2 data> \
  --output_parent_dir=<path to the output tfrecords directory> \
  --subset=dev \
  --num_shards=1000

For example, the command to convert the sample data provided with this repository is

python data/export_voxceleb_to_tfrecords.py \
  --dataset_parent_dir=_datasets/sample_fsth_eval_subset_processed \
  --output_parent_dir=_datasets/sample_fsth_eval_subset_processed/tfrecords \
  --subset=test \
  --num_shards=1
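
As a quick sanity check (a sketch, assuming the shards are written directly under the output directory), you can count the serialized examples in the generated shards:

# Count serialized examples across the exported tfrecord shards.
# The glob pattern is an assumption about where/how the shards are named.
import glob
import tensorflow as tf

shards = glob.glob("_datasets/sample_fsth_eval_subset_processed/tfrecords/*")
num_examples = sum(1 for _ in tf.data.TFRecordDataset(shards))
print(f"{len(shards)} shard(s), {num_examples} example(s)")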

Training

Training consists of two stages: first, we bootstrap the training of the layout generator by training it to predict a segmentation map for the target view; second, we turn off the semantic segmentation loss and train our full pipeline. Our code assumes the training data is in the tfrecord format (see the dataset preparation instructions above).
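
As a rough illustration of the first stage (a conceptual sketch, not the repository's actual code), the layout generator can be supervised with a pixel-wise cross-entropy against the oracle segmentation maps:

# Conceptual sketch of the stage-1 layout supervision (not the repo's actual code).
import tensorflow as tf

def layout_pretrain_loss(layout_logits, oracle_seg):
    # layout_logits: [batch, H, W, num_classes] raw scores from the layout generator.
    # oracle_seg:    [batch, H, W] integer class ids from the face parser.
    ce = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
    return ce(oracle_seg, layout_logits)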

After you have generated the dev and test tfrecords of the VoxCeleb2 dataset, you can run the training as follows:

  • Run the layout pre-training step by executing the following command:
sh scripts/train_lsr_pretrain.sh
  • Train the full pipeline: after pre-training is complete, run the following command:
sh scripts/train_lsr_meta_learning.sh

Please refer to the training scripts for details about the different training configurations and how to set the correct flags for your training data.
