PyTorch Implementation of Daft-Exprt: Robust Prosody Transfer Across Speakers for Expressive Speech Synthesis

Overview

Daft-Exprt: Robust Prosody Transfer Across Speakers for Expressive Speech Synthesis

Julian Zaïdi, Hugo Seuté, Benjamin van Niekerk, Marc-André Carbonneau

In our recent paper, we propose Daft-Exprt, a multi-speaker acoustic model that advances the state of the art in inter-speaker and inter-text prosody transfer. This improvement is achieved with FiLM conditioning layers, combined with adversarial training that encourages disentanglement between prosodic information and speaker identity. The acoustic model inherits attractive qualities from FastSpeech 2, such as fast inference and prediction of local prosody attributes for finer-grained control over generation. Moreover, results indicate that adversarial training effectively discards speaker identity information from the prosody representation, which ensures that Daft-Exprt consistently generates speech with the desired voice.
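To make the FiLM conditioning concrete, the sketch below shows a feature-wise linear modulation layer in PyTorch: a conditioning embedding predicts a per-channel scale (gamma) and shift (beta) that modulate the hidden sequence. This is a minimal, hypothetical sketch; the layer names, dimensions, and the origin of the conditioning vector are assumptions, not the exact layers used in this repository.

import torch
import torch.nn as nn

class FiLM(nn.Module):
    """Feature-wise Linear Modulation: y = gamma(c) * x + beta(c) (minimal sketch)."""
    def __init__(self, feature_dim: int, conditioning_dim: int):
        super().__init__()
        # a single linear layer predicts both the scale (gamma) and the shift (beta)
        self.affine = nn.Linear(conditioning_dim, 2 * feature_dim)

    def forward(self, x: torch.Tensor, conditioning: torch.Tensor) -> torch.Tensor:
        # x: (batch, time, feature_dim) -- conditioning: (batch, conditioning_dim)
        gamma, beta = self.affine(conditioning).chunk(2, dim=-1)
        return gamma.unsqueeze(1) * x + beta.unsqueeze(1)

# hypothetical usage: modulate phoneme hidden states with an utterance-level prosody embedding
film = FiLM(feature_dim=256, conditioning_dim=128)
hidden_states = torch.randn(4, 37, 256)   # (batch, time, feature_dim)
prosody_embedding = torch.randn(4, 128)   # (batch, conditioning_dim)
modulated = film(hidden_states, prosody_embedding)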

Experimental results show that Daft-Exprt accurately transfers prosody, while yielding naturalness comparable to state-of-the-art expressive models. Visit our demo page for audio samples related to the paper experiments.

Pre-trained model

Full disclosure: The model provided in this repository is not the same as the one evaluated in the paper. The model of the paper was trained with proprietary data, which prevents us from releasing it publicly.
We pre-train Daft-Exprt on a combination of the LJ Speech dataset and the Emotional Speech Dataset (ESD) from Zhou et al.
Visit the releases of this repository to download the pre-trained model and to listen to prosody transfer examples using this same model.

Table of Contents

  • Installation
  • Quick Start Example
  • Citation
  • Contributing

Installation

Local Environment

Requirements:

  • conda, which we recommend for Python environment management (for example, download and install Miniconda)

Create your Python environment and install dependencies using the Makefile:

  1. conda create -n daft_exprt python=3.8 -y
  2. conda activate daft_exprt
  3. cd environment
  4. make

All Linux/Conda/Python dependencies will be installed by the Makefile, and the repository will be installed as a pip package in editable mode.

Docker Image

Requirements:

  • Docker, with GPU support (the run example below uses --gpus all)

Build the Docker image using the associated Dockerfile:

  1. docker build -f environment/Dockerfile -t daft_exprt .

Quick Start Example

Introduction

This quick start guide will illustrate how to use the different scripts of this repository to:

  1. Format datasets
  2. Pre-process these datasets
  3. Train Daft-Exprt on the pre-processed data
  4. Generate a dataset for vocoder fine-tuning
  5. Use Daft-Exprt for TTS synthesis

All scripts are located in the scripts directory.
The Daft-Exprt source code is located in the daft_exprt directory.
Config parameters used in the scripts are all instantiated in hparams.py.

As a quick start example, we use the 22kHz LJ Speech dataset and the 16kHz Emotional Speech Dataset (ESD) from Zhou et al.
This combines a total of 11 speakers. All speaker datasets must be in the same root directory. For example:

/data_dir
    LJ_Speech
    ESD
        spk_1
        ...
        spk_N

In this example, we use the docker image built in the previous section:

docker run -it --gpus all -v /path/to/data_dir:/workdir/data_dir -v /path/to/repo_dir:/workdir/repo_dir IMAGE_ID

Dataset Formatting

The source code expects the following tree structure for each speaker dataset:

/speaker_dir
    metadata.csv
    /wavs
        wav_file_name_1.wav
        ...
        wav_file_name_N.wav

metadata.csv must be formatted as follows:

wav_file_name_1|text_1
...
wav_file_name_N|text_N

Since each dataset has its own nomenclature, this project does not provide a ready-made universal formatting script.
However, the script format_dataset.py already provides the code to format LJ and ESD (a hypothetical sketch for a custom dataset is shown after these commands):

python format_dataset.py \
    --data_set_dir /workdir/data_dir/LJ_Speech \
    LJ

python format_dataset.py \
    --data_set_dir /workdir/data_dir/ESD \
    ESD \
    --language english
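
For a dataset that is neither LJ nor ESD, a small custom script can produce the expected tree structure and metadata.csv. The following is a hypothetical Python sketch: it assumes each raw wav file has a sibling .txt transcript, and all paths and the transcript layout are assumptions to adapt to your own nomenclature.

import shutil
from pathlib import Path

# hypothetical layout: /raw_dir/utt_0001.wav with a matching /raw_dir/utt_0001.txt transcript
raw_dir = Path("/workdir/data_dir/my_speaker_raw")
speaker_dir = Path("/workdir/data_dir/my_speaker")
wavs_dir = speaker_dir / "wavs"
wavs_dir.mkdir(parents=True, exist_ok=True)

metadata_lines = []
for wav_path in sorted(raw_dir.glob("*.wav")):
    text = wav_path.with_suffix(".txt").read_text(encoding="utf-8").strip()
    shutil.copy(wav_path, wavs_dir / wav_path.name)
    # metadata.csv expects "wav_file_name|text", without the .wav extension
    metadata_lines.append(f"{wav_path.stem}|{text}")

(speaker_dir / "metadata.csv").write_text("\n".join(metadata_lines) + "\n", encoding="utf-8")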

Data Pre-Processing

In this section, the code will:

  1. Align data using MFA
  2. Extract features for training
  3. Create train and validation sets
  4. Extract feature statistics on the train set for speaker standardization (illustrated in the sketch below)
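
Step 4 computes per-speaker statistics on the train set so that prosodic features (such as pitch) can be standardized for each speaker. The minimal NumPy sketch below only illustrates the general idea of z-score standardization; the actual features, statistics, and storage format are defined by the pre-processing code and hparams.py.

import numpy as np

# hypothetical per-speaker pitch contours, one array of F0 values per training utterance
speaker_contours = [np.array([110.0, 115.0, 120.0]), np.array([105.0, 112.0, 118.0])]

# pool every frame of this speaker to compute train-set statistics ...
all_frames = np.concatenate(speaker_contours)
mean, std = all_frames.mean(), all_frames.std()

# ... then z-score each utterance with the speaker-level statistics
standardized = [(contour - mean) / std for contour in speaker_contours]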

To pre-process all available formatted data (i.e. LJ and ESD in this example):

python training.py \
    --experiment_name EXPERIMENT_NAME \
    --data_set_dir /workdir/data_dir \
    pre_process

This will pre-process the data using the default hyper-parameters, which are set for 22kHz audio.
All outputs related to the experiment will be stored in /workdir/repo_dir/trainings/EXPERIMENT_NAME.
You can also target specific speakers for data pre-processing. For example, to consider only ESD speakers:

python training.py \
    --experiment_name EXPERIMENT_NAME \
    --speakers ESD/spk_1 ... ESD/spk_N \
    --data_set_dir /workdir/data_dir \
    pre_process

The pre_process function takes several arguments:

  • --features_dir: absolute path where pre-processed data will be stored. Defaults to /workdir/repo_dir/datasets.
  • --proportion_validation: proportion of examples that will go to the validation set. Defaults to 0.1% per speaker.
  • --nb_jobs: number of cores to use for Python multi-processing. If set to max, all CPU cores are used. Defaults to 6.

Note that the first time you pre-process the data, this step will take several hours.
You can reduce computation time by increasing the --nb_jobs parameter.

Training

Once pre-processing is finished, launch training. To train on all pre-processed data:

python training.py \
    --experiment_name EXPERIMENT_NAME \
    --data_set_dir /workdir/data_dir \
    train

Or if you targeted specific speakers during pre-processing (e.g. ESD speakers):

python training.py \
    --experiment_name EXPERIMENT_NAME \
    --speakers ESD/spk_1 ... ESD/spk_N \
    --data_set_dir /workdir/data_dir \
    train

All outputs related to the experiment will be stored in /workdir/repo_dir/trainings/EXPERIMENT_NAME.

The train function takes several arguments:

  • --checkpoint: absolute path of a Daft-Exprt checkpoint. Defaults to "".
  • --no_multiprocessing_distributed: disable PyTorch multi-processing distributed training. Defaults to False.
  • --world_size: number of nodes for distributed training. Defaults to 1.
  • --rank: node rank for distributed training. Defaults to 0.
  • --master: URL used to set up distributed training. Defaults to tcp://localhost:54321.
These default values will launch a new training starting at iteration 0, using all GPUs available on the machine.
Note, however, that the default batch size and gradient accumulation hyper-parameters assume a single GPU: they are set to reproduce the effective batch size of 48 used in the paper.
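
For example, a hypothetical two-node launch could look like the following. This sketch assumes the distributed arguments are passed to the train sub-command, similarly to the way --checkpoint is passed to fine_tune below; the master URL is a placeholder, and the command would be run once per node with the corresponding --rank:

python training.py \
    --experiment_name EXPERIMENT_NAME \
    --data_set_dir /workdir/data_dir \
    train \
    --world_size 2 \
    --rank 0 \
    --master tcp://node0.example.com:54321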

The code also supports TensorBoard logging. To display the logs:

tensorboard --logdir_spec=EXPERIMENT_NAME:/workdir/repo_dir/trainings/EXPERIMENT_NAME/logs

Vocoder Fine-Tuning

Once training is finished, you can create a dataset for vocoder fine-tuning:

python training.py \
    --experiment_name EXPERIMENT_NAME \
    --data_set_dir /workdir/data_dir \
    fine_tune \
    --checkpoint CHECKPOINT_PATH

Or if you targeted specific speakers during pre-processing and training (e.g. ESD speakers):

python training.py \
    --experiment_name EXPERIMENT_NAME \
    --speakers ESD/spk_1 ... ESD/spk_N \
    --data_set_dir /workdir/data_dir \
    fine_tune \
    --checkpoint CHECKPOINT_PATH

The fine-tuning dataset will be stored in /workdir/repo_dir/trainings/EXPERIMENT_NAME/fine_tuning_dataset.

TTS Synthesis

For an example of how to use Daft-Exprt for TTS synthesis, run the script synthesize.py:

python synthesize.py \
    --output_dir OUTPUT_DIR \
    --checkpoint CHECKPOINT

Default sentences and reference utterances are used in the script.

The script also offers the following options:

  • --batch_size: process batches of sentences in parallel
  • --real_time_factor: estimate the real-time factor of Daft-Exprt for the chosen batch size
  • --control: perform local prosody control
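
As an illustration, a hypothetical invocation that synthesizes the default sentences in batches of 4 might look as follows; the value given to --batch_size is an assumption, and the exact argument signatures can be checked directly in synthesize.py:

python synthesize.py \
    --output_dir OUTPUT_DIR \
    --checkpoint CHECKPOINT \
    --batch_size 4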

Citation

@article{Zaidi2021,
    author = {Za{\"{i}}di, Julian and Seut{\'{e}}, Hugo and van Niekerk, Benjamin and Carbonneau, Marc-Andr{\'{e}}},
    title = {{Daft-Exprt: Robust Prosody Transfer Across Speakers for Expressive Speech Synthesis}},
    journal = {arXiv},
    eprint = {2108.02271},
    arxivId = {2108.02271},
    url = {https://arxiv.org/pdf/2108.02271.pdf},
    year = {2021}
}

Contributing

Any contribution to this repository is more than welcome!
If you have any feedback, please send it to [email protected].

© [2021] Ubisoft Entertainment. All Rights Reserved

Comments
  • Error while running Pretrained model

    Hi @julianzaidi, I pointed the checkpoint argument to that file (archive/data.pkl) but got an unpickling error. If you could tell me how to run this pretrained model, it would be very kind of you.

    python synthesize.py --output_dir OUTPUT_DIR --checkpoint "archive/data.pkl"

    Traceback (most recent call last):
      File "synthesize.py", line 148
        file_names, refs, speaker_ids = synthesize(args, use_griffin_lim=True)
      File "synthesize.py", line 38, in synthesize
        checkpoint_dict = torch.load(args.checkpoint, map_location=f'cuda:{0}')
      File "/home/saomya/miniconda3/envs/daft_exprt/lib/python3.8/site-packages/torch/serialization.py", line 608, in load
        return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
      File "/home/saomya/miniconda3/envs/daft_exprt/lib/python3.8/site-packages/torch/serialization.py", line 777, in _legacy_load
        magic_number = pickle_module.load(f, **pickle_load_args)
    _pickle.UnpicklingError: A load persistent id instruction was encountered, but no persistent_load function was specified.

    opened by anushvst 12
  • ldd version

    Hi, when I run python training.py pre_process, it prompts Exception: REAPER binary -- Unsupported ldd version: 2.27 < 2.29. However, I cannot update the glibc version on my machine. Are there any alternatives? Thanks!

    opened by inconnu11 3
  • How to run the Pre-trained model

    Hi @julianzaidi, we tried to run your pre-trained model. However, we could not find clarification on the values of the parameters that we need to pass, for instance the specific checkpoint. We also ran into CUDA out-of-memory issues. We would like to run the pre-trained model on Windows instead of Linux. How could we do this?

    opened by saomya-seasia 2
  • Automatic aligner like in FastPitch?

    Hello! Do you think it is possible to incorporate an automatic aligner as in FastPitch (https://github.com/NVIDIA/DeepLearningExamples/tree/master/PyTorch/SpeechSynthesis/FastPitch), as described in the paper "One TTS Alignment To Rule Them All"? This aligner essentially only requires graphemes or phonemes and learns with the rest of the network. It would allow omitting Montreal Forced Aligner preprocessing and would decrease preprocessing time. If it is possible, what should be changed to allow the use of such an aligner?

    opened by juliakorovsky 2
  • Position-dependent prosody transfer result

    It seems possible to obtain position-dependent prosody transfer results with an utterance-level embedding. Why can location information be embedded in a representation obtained by a mean-pooling operation?

    opened by hyzhan 2
  • np.frombuffer

    Hi, when I extract the F0 using REAPER, it shows the error "ValueError: buffer size must be a multiple of element size". Could you please help me out?

    opened by inconnu11 0
  • Able to train on LJ & ESD dataset but error in training the model on custom dataset

    Hi @julianzaidi @macarbonneau, hope you guys are doing well. Just want to ask a few queries regarding the training aspect.

    • I tried to train the model on my voice

    • Formatted the dataset successfully

    • In the pre-processing step, I got the error: ValueError: zero-size array to reduction operation minimum which has no identity

    • Created directories in this format: work_dir/data_dir/LJ_Speech/wavs

    • In the wavs folder, I provided around 10 audio clips, each around 2-3 minutes long

    • Prepared the metadata according to the instructions in the repository

    • Should we use short audio clips to train the model?

    Any suggestion regarding this will be very kind of you.

    opened by anushvst 0
  • Problems regarding pretrained model of the daft exprt model

    Hi @julianzaidi @macarbonneau, hope you guys are doing well. Just want to ask a few queries regarding the model.

    • I want to use the model so that it can generate audio in a Hip Hop artist's voice (he passed away a few years ago), given a certain prosody in the reference voice and the lyrics as the text.

    • Curious about the answers to these questions, as I am trying to get some audio clips longer than 30 seconds.

    When I run the pretrained model with a reference voice and text, the output sounds robotic/unnatural.

    • I gave my reference voice (24 sec)

    • Text: "Hello John, my name is Don with marketing dot com and I actually just recently came across micro soft and I thought there were some interesting things that we might be able to do together. Um, we do a lot of work in retail and I'm actually coming to New York next week for a conference. So, if you're around I would love to meet with you, buy you a cup of coffee and tell you a little bit more about what we're thinking that we can do for you. Alright, hope to see you soon."

    • got this output

    https://user-images.githubusercontent.com/92500349/201936746-6b7760a1-fbca-465a-ab27-96ae648564a8.mp4

    1. Also, in the output, the model generated a robotic or unnatural voice until 18 seconds; after that, it generated a distorted voice. Any idea about the distortion?

    2. Should we give the model a short reference voice and text?

    3. Can the model produce output longer than 1 minute, or does it only produce short clips?

    4. Is punctuation necessary? Also, will it work if we give "7" instead of "seven" in the text file?

    5. Want to clarify whose voice the model produces in the output: the reference speaker's voice or the voice of the speakers it was trained on (LJ, ESD)?

    6. I am still getting an unnatural (but better than before) voice after training on the LJ dataset. Any tips on how to get natural-sounding output?

    • Reference voice - LJ's voice
    • Text: Hello John, my name is Don with marketing dot com. I actually just recently came across microsoft.
    • The output i got was:

    https://user-images.githubusercontent.com/92500349/202087380-1858ecab-b32f-4db7-9021-885a185222e0.mp4

    • Is it because the model architecture used to generate the audio on the demo page is different from the model architecture present in the repository?
    • Any methods to reduce noise in the output voice?
    opened by anushvst 0
Releases (1.0.0)
  • 1.0.0 (Sep 10, 2021)

    Release contents:

    • Daft-Exprt model pre-trained on LJ Speech Dataset and the Emotional Speech Dataset from Zhou et al.
    • Prosody transfer examples synthesized using this pre-trained model and Griffin-Lim algorithm

    Full disclosure: The model provided in this release is not the same as the one evaluated in the paper. The model of the paper was trained with proprietary data, which prevents us from releasing it publicly.

    Source code(tar.gz)
    Source code(zip)
    DaftExprt_LJ_ESD_22kHz(168.73 MB)
    demo.zip(13.51 MB)