Almost Unsupervised Text to Speech and Automatic Speech Recognition
Paper: Almost Unsupervised Text to Speech and Automatic Speech Recognition
Authors
- Yi Ren* (Zhejiang University) rayeren613@gmail.com
- Xu Tan* (Microsoft Research) xuta@microsoft.com
- Tao Qin (Microsoft Research) taoqin@microsoft.com
- Sheng Zhao (Microsoft) Sheng.Zhao@microsoft.com
- Zhou Zhao (Zhejiang University) zhaozhou@zju.edu.cn
- Tie-Yan Liu (Microsoft Research) tyliu@microsoft.com
* Equal contribution.
Abstract
Text to speech (TTS) and automatic speech recognition (ASR) are two dual tasks in speech processing, and both have achieved impressive performance thanks to recent advances in deep learning and large amounts of aligned speech and text data. However, the lack of aligned data poses a major practical problem for TTS and ASR on low-resource languages. In this paper, by leveraging the dual nature of the two tasks, we propose an almost unsupervised learning method that requires only a few hundred paired speech and text samples plus extra unpaired data for TTS and ASR. Our method consists of the following components: (1) a denoising auto-encoder, which reconstructs speech and text sequences to develop language modeling capability in both the speech and text domains; (2) dual transformation, where the TTS model transforms text $y$ into speech $\hat{x}$ and the ASR model leverages the transformed pair $(\hat{x}, y)$ for training, and vice versa, to boost the accuracy of both tasks; (3) bidirectional sequence modeling, which addresses error propagation, especially on long speech and text sequences, when training with few paired data; and (4) a unified model structure, which combines all the above components for TTS and ASR based on the Transformer model. Our method achieves a 99.84% word-level intelligible rate and 2.68 MOS for TTS, and 11.7% PER for ASR, on the LJSpeech dataset, leveraging only 200 paired speech and text samples (about 20 minutes of audio) together with extra unpaired speech and text data.
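To make the dual-transformation idea above concrete, here is a minimal, hypothetical sketch of one such training iteration on unpaired data: TTS synthesizes speech for unpaired text to create pseudo-pairs for ASR, while ASR transcribes unpaired speech to create pseudo-pairs for TTS. The toy model classes, feature sizes, and loss choices are illustrative stand-ins, not the paper's actual Transformer-based implementation or its denoising auto-encoder component.

```python
# Hypothetical sketch of the dual-transformation training step (not the paper's code).
import torch
import torch.nn as nn

VOCAB, MEL_DIM, HID = 40, 80, 128

class ToyTTS(nn.Module):
    """Maps a phoneme-id sequence to a mel-spectrogram (stand-in for a Transformer TTS)."""
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, HID)
        self.out = nn.Linear(HID, MEL_DIM)
    def forward(self, tokens):                        # tokens: (B, T)
        return self.out(self.emb(tokens))             # mel:    (B, T, MEL_DIM)

class ToyASR(nn.Module):
    """Maps a mel-spectrogram to per-frame phoneme logits (stand-in for a Transformer ASR)."""
    def __init__(self):
        super().__init__()
        self.proj = nn.Linear(MEL_DIM, HID)
        self.out = nn.Linear(HID, VOCAB)
    def forward(self, mel):                           # mel: (B, T, MEL_DIM)
        return self.out(torch.relu(self.proj(mel)))   # logits: (B, T, VOCAB)

tts, asr = ToyTTS(), ToyASR()
opt = torch.optim.Adam(list(tts.parameters()) + list(asr.parameters()), lr=1e-3)
mel_loss, ce_loss = nn.L1Loss(), nn.CrossEntropyLoss()

# Dummy unpaired batches: text-only and speech-only. The toy models keep the
# sequence length unchanged, so pseudo-labels align frame-by-frame here.
unpaired_text = torch.randint(0, VOCAB, (4, 16))      # (B, T)
unpaired_mel  = torch.randn(4, 16, MEL_DIM)           # (B, T, MEL_DIM)

for step in range(3):
    opt.zero_grad()

    # Text side: TTS synthesizes speech x_hat for unpaired text y;
    # the pseudo-pair (x_hat, y) trains the ASR model.
    with torch.no_grad():
        x_hat = tts(unpaired_text)
    asr_logits = asr(x_hat)
    loss_asr = ce_loss(asr_logits.reshape(-1, VOCAB), unpaired_text.reshape(-1))

    # Speech side: ASR transcribes unpaired speech x into y_hat;
    # the pseudo-pair (x, y_hat) trains the TTS model.
    with torch.no_grad():
        y_hat = asr(unpaired_mel).argmax(-1)
    loss_tts = mel_loss(tts(y_hat), unpaired_mel)

    (loss_asr + loss_tts).backward()
    opt.step()
    print(f"step {step}: asr {loss_asr.item():.3f}  tts {loss_tts.item():.3f}")
```

In the actual method, each model is trained on the pseudo-pairs generated by the other (together with the denoising auto-encoder loss and the small amount of truly paired data), so the accuracy of both models improves iteratively as the synthesized pairs become cleaner.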
Audio Samples
has never been surpassed
[Audio samples: 200-Pairs Only vs. Our Method]
especially as no more time is occupied or cost incurred in casting setting or printing beautiful letters
[Audio samples: 200-Pairs Only vs. Our Method]
than in the same operations with ugly ones
[Audio samples: 200-Pairs Only vs. Our Method]
the forms of printed letters should be beautiful and that their arrangement on the page should be reasonable and a help to the shapeliness of the letters themselves
[Audio samples: 200-Pairs Only vs. Our Method]
the first books were printed in black letter ie the letter which was a gothic development of the ancient roman character
[Audio samples: 200-Pairs Only vs. Our Method]
Our Related Works
- FastSpeech: Fast, Robust and Controllable Text to Speech
- Semi-Supervised Neural Architecture Search
- MultiSpeech: Multi-Speaker Text to Speech with Transformer
- LRSpeech: Extremely Low-Resource Speech Synthesis and Recognition
- DeepSinger: Singing Voice Synthesis with Data Mined From the Web
- FastSpeech 2: Fast and High-Quality End-to-End Text-to-Speech
- UWSpeech: Speech to Speech Translation for Unwritten Languages
- Denoising Text to Speech with Frame-Level Noise Modeling