FastSpeech 2: Fast and High-Quality End-to-End Text to Speech


arXiv: 2006.04558

Authors

* Equal contribution.

Abstract

Non-autoregressive text to speech (TTS) models such as FastSpeech (Ren et al., 2019) can synthesize speech significantly faster than previous autoregressive models with comparable quality. The training of the FastSpeech model relies on an autoregressive teacher model for duration prediction (to provide more information as input) and knowledge distillation (to simplify the data distribution in output), which can ease the one-to-many mapping problem (i.e., multiple speech variations correspond to the same text) in TTS. However, FastSpeech has several disadvantages: 1) the teacher-student distillation pipeline is complicated and time-consuming, and 2) the duration extracted from the teacher model is not accurate enough, and the target mel-spectrograms distilled from the teacher model suffer from information loss due to data simplification, both of which limit the voice quality. In this paper, we propose FastSpeech 2, which addresses the issues in FastSpeech and better solves the one-to-many mapping problem in TTS by 1) directly training the model with the ground-truth target instead of the simplified output from the teacher, and 2) introducing more variation information of speech (e.g., pitch, energy, and more accurate duration) as conditional inputs. Specifically, we extract duration, pitch, and energy from the speech waveform, directly take them as conditional inputs in training, and use predicted values in inference. We further design FastSpeech 2s, which is the first attempt to directly generate speech waveform from text in parallel, enjoying the benefit of fully end-to-end inference. Experimental results show that 1) FastSpeech 2 achieves a 3x training speed-up over FastSpeech, and FastSpeech 2s enjoys even faster inference speed; and 2) FastSpeech 2 and 2s outperform FastSpeech in voice quality, and FastSpeech 2 can even surpass autoregressive models.
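As a rough illustration of the variance information FastSpeech 2 conditions on, frame-level energy can be computed as the L2 norm of each STFT frame's magnitude spectrum, and frame-level pitch can be estimated from the waveform. The sketch below uses a simple autocorrelation pitch estimator for self-containedness; the paper uses a more robust F0 extractor (and further models pitch contours with a continuous wavelet transform), so the frame sizes and method here are illustrative assumptions, not the paper's exact recipe:

```python
import numpy as np

def frame_signal(x, frame_len=1024, hop=256):
    """Slice a 1-D waveform into overlapping frames."""
    n_frames = 1 + (len(x) - frame_len) // hop
    return np.stack([x[i * hop : i * hop + frame_len] for i in range(n_frames)])

def frame_energy(x, frame_len=1024, hop=256):
    """Per-frame energy: L2 norm of the magnitude spectrum of each windowed frame."""
    frames = frame_signal(x, frame_len, hop) * np.hanning(frame_len)
    return np.linalg.norm(np.abs(np.fft.rfft(frames, axis=1)), axis=1)

def frame_pitch(x, sr, frame_len=1024, hop=256, fmin=60.0, fmax=500.0):
    """Crude per-frame F0: pick the autocorrelation peak whose lag lies in [sr/fmax, sr/fmin]."""
    frames = frame_signal(x, frame_len, hop)
    lo, hi = int(sr / fmax), int(sr / fmin)
    f0 = []
    for f in frames:
        f = f - f.mean()
        ac = np.correlate(f, f, mode="full")[frame_len - 1:]  # non-negative lags
        lag = lo + np.argmax(ac[lo:hi])
        f0.append(sr / lag)
    return np.array(f0)
```

In training, such frame-level pitch and energy sequences (averaged over each phoneme's duration) serve as conditional inputs alongside the text; at inference time, predictors trained on these targets supply the values instead.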

Audio Samples

All of the audio samples use Parallel WaveGAN (PWG) as the vocoder. For all audio samples, the background noise of LJSpeech is reduced using spectral subtraction.

Comparison with Other Models

Were the leaders in this luckless change, though our own Baskerville, who was at work some years before them, went much on the same lines.

[Audio: GT | GT (PWG) | Transformer TTS | FastSpeech | FastSpeech 2 | FastSpeech 2s]


And were occupied as a rule by ten to fifteen people when the prison was not crowded, but double the number was occasionally placed in them.

[Audio: GT | GT (PWG) | Transformer TTS | FastSpeech | FastSpeech 2 | FastSpeech 2s]


Was used for debtors arrested for the lowest sums within twelve miles of the palace of Whitehall.

[Audio: GT | GT (PWG) | Transformer TTS | FastSpeech | FastSpeech 2 | FastSpeech 2s]


Notably as when numbers filled Newgate in anticipation of Lord Redesdale’s bill for insolvent debtors.

[Audio: GT | GT (PWG) | Transformer TTS | FastSpeech | FastSpeech 2 | FastSpeech 2s]


Others who arrived just after the time of distribution were often forty-eight hours without food. The latter might also be six days without meat.

[Audio: GT | GT (PWG) | Transformer TTS | FastSpeech | FastSpeech 2 | FastSpeech 2s]

Ablation Study

Most of Caxton’s own types are of an earlier character, though they also much resemble Flemish or Cologne letter.

[Audio: FastSpeech 2 | − CWT | − Pitch | − Energy | − Energy − Pitch]
[Audio: FastSpeech 2s | − CWT | − Pitch | − Energy | − Energy − Pitch | − Mel Decoder]


And were occupied as a rule by ten to fifteen people when the prison was not crowded, but double the number was occasionally placed in them.

[Audio: FastSpeech 2 | − CWT | − Pitch | − Energy | − Energy − Pitch]
[Audio: FastSpeech 2s | − CWT | − Pitch | − Energy | − Energy − Pitch | − Mel Decoder]

Related Work

Almost Unsupervised Text to Speech and Automatic Speech Recognition
FastSpeech: Fast, Robust and Controllable Text to Speech
Semi-Supervised Neural Architecture Search
MultiSpeech: Multi-Speaker Text to Speech with Transformer
LRSpeech: Extremely Low-Resource Speech Synthesis and Recognition
DeepSinger: Singing Voice Synthesis with Data Mined From the Web
UWSpeech: Speech to Speech Translation for Unwritten Languages
Denoising Text to Speech with Frame-Level Noise Modeling