SC-GlowTTS: an Efficient Zero-Shot Multi-Speaker Text-To-Speech Model
Edresson Casanova, Christopher Shulby, Eren Gölge, Nicolas Michael Müller, Frederico Santos de Oliveira, Arnaldo Candido Junior, Anderson da Silva Soares, Sandra Maria Aluisio, Moacir Antonelli Ponti
In our recent paper, we propose SC-GlowTTS: an efficient zero-shot multi-speaker text-to-speech model that improves similarity for speakers unseen during training. We propose a speaker-conditional architecture that explores a flow-based decoder that works in a zero-shot scenario. As text encoders, we explore a dilated residual convolutional encoder, a gated convolutional encoder, and a transformer-based encoder. Additionally, we show that adjusting a GAN-based vocoder to the spectrograms predicted by the TTS model on the training dataset can significantly improve the similarity and speech quality for new speakers. Finally, we show that our model converges in training with only 11 speakers, reaching state-of-the-art results for similarity with new speakers and high speech quality.
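To make the speaker-conditional idea concrete, here is a minimal PyTorch sketch of how an externally computed speaker embedding (d-vector) can condition one affine coupling step of a flow-based decoder. This is an illustration under our own simplifications, not the official SC-GlowTTS code; the class name, layer sizes, and conditioning scheme below are assumptions made for the example.

```python
# Minimal sketch (assumption: simplified illustration, not the official SC-GlowTTS code).
# A d-vector from an external speaker encoder is projected and added as a global
# conditioning signal inside one affine coupling step of a flow-based decoder.
import torch
import torch.nn as nn


class SpeakerConditionalCoupling(nn.Module):
    """One affine coupling step conditioned on a speaker embedding (d-vector)."""

    def __init__(self, channels: int, d_vector_dim: int = 256, hidden: int = 192):
        super().__init__()
        self.half = channels // 2
        # Project the speaker embedding to the hidden width of the coupling network.
        self.spk_proj = nn.Linear(d_vector_dim, hidden)
        self.in_conv = nn.Conv1d(self.half, hidden, kernel_size=5, padding=2)
        self.out_conv = nn.Conv1d(hidden, channels, kernel_size=5, padding=2)  # -> scale + shift

    def forward(self, z: torch.Tensor, d_vector: torch.Tensor) -> torch.Tensor:
        # z: [batch, channels, frames]; d_vector: [batch, d_vector_dim]
        z_a, z_b = z[:, : self.half], z[:, self.half :]
        # Broadcast the projected speaker embedding over the time axis.
        h = self.in_conv(z_a) + self.spk_proj(d_vector).unsqueeze(-1)
        h = self.out_conv(torch.relu(h))
        log_scale, shift = h.chunk(2, dim=1)
        z_b = z_b * torch.exp(log_scale) + shift
        return torch.cat([z_a, z_b], dim=1)


# Usage: an 80-channel latent over 100 frames and a 256-dim d-vector.
coupling = SpeakerConditionalCoupling(channels=80, d_vector_dim=256)
z = torch.randn(2, 80, 100)
d_vec = torch.randn(2, 256)
print(coupling(z, d_vec).shape)  # torch.Size([2, 80, 100])
```

Stacking several such conditioned coupling steps with invertible transforms, as in a Glow-style decoder, yields a model whose inverse pass can generate spectrograms for a voice it only sees through the d-vector.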
Audio samples
Visit our website for audio samples.
Implementation
All of our experiments were implemented in Coqui TTS; a brief usage sketch follows the checkpoints table below.
Checkpoints
Model | URL |
---|---|
Speaker Encoder by @mueller91 | link |
Tacotron 2 | link |
SC-GlowTTS-Trans | link |
SC-GlowTTS-Res | link |
SC-GlowTTS-Gated | link |
SC-GlowTTS-Trans 11 speakers | link |
HiFi-GAN | link |
All checkpoints | link |
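Since the experiments were run in Coqui TTS, zero-shot synthesis with one of the checkpoints above can look roughly like the sketch below. The file names are placeholders and the exact Python API can differ between Coqui TTS releases, so treat this as an assumption-laden example rather than the project's documented usage.

```python
# Hypothetical usage sketch (assumptions: placeholder paths, API details vary by Coqui TTS version).
from TTS.api import TTS

# Load a downloaded SC-GlowTTS checkpoint and its config from the table above.
tts = TTS(
    model_path="sc_glow_tts_trans/checkpoint.pth",
    config_path="sc_glow_tts_trans/config.json",
)

# Zero-shot synthesis: the target voice is defined only by a short reference recording.
tts.tts_to_file(
    text="This voice was never seen during training.",
    speaker_wav="reference_speaker.wav",
    file_path="output.wav",
)
```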
Colab demos
SC-GlowTTS-Trans trained with 11 speakers