Low-resource expressive text-to-speech using data augmentation
Authors: Goeric Huybrechts, Thomas Merritt, Giulia Comini, Bartek Perz, Raahil Shah, Jaime Lorenzo-Trueba
While recent neural text-to-speech (TTS) systems perform remarkably well, they typically require a substantial amount of recordings from the target speaker reading in the desired speaking style. In this work, we present a novel 3-step methodology that avoids the costly process of recording large amounts of target data, building expressive style voices with as little as 15 minutes of such recordings. First, we augment the data via voice conversion, leveraging recordings in the desired speaking style from other speakers. Next, we use that synthetic data on top of the available recordings to train a TTS model. Finally, we fine-tune that model to further increase quality. Our evaluations show that the proposed approach brings significant improvements over non-augmented models across many perceived aspects of synthesised speech. We demonstrate the proposed approach on two styles (newscaster and conversational), on various speakers, and on both single-speaker and multi-speaker models, illustrating its robustness.
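The 3-step pipeline above can be sketched in code. The snippet below is a minimal illustrative outline, not the authors' implementation: all function names, data shapes, and the toy "model" dictionary are hypothetical placeholders standing in for a real voice-conversion system and TTS trainer.

```python
# Hedged sketch of the 3-step low-resource pipeline:
#   1) voice-convert style-matched recordings from supporting speakers,
#   2) train a TTS model on real target data plus the synthetic data,
#   3) fine-tune on the real target data only.
# Every function here is an illustrative stand-in.

def voice_convert(source_utterances, target_speaker_id):
    """Step 1: convert supporting-speaker recordings in the desired
    style into the target speaker's voice (synthetic training data)."""
    return [(text, f"vc:{target_speaker_id}:{audio}")
            for text, audio in source_utterances]

def train_tts(recordings, synthetic):
    """Step 2: train a TTS model on the real target recordings
    combined with the voice-converted synthetic data."""
    corpus = list(recordings) + list(synthetic)
    return {"weights": "trained", "corpus_size": len(corpus)}

def fine_tune(model, recordings):
    """Step 3: fine-tune on the real target recordings alone to push
    the model toward genuine target-speaker characteristics."""
    model = dict(model)
    model["fine_tuned_on"] = len(recordings)
    return model

# Toy data: a small set of target recordings in the desired style,
# plus style-matched recordings from other (supporting) speakers.
target = [("hello", "rec_a"), ("in local news", "rec_b")]
supporting = [("markets rose", "spk2_rec_1"),
              ("in other news", "spk3_rec_1")]

synthetic = voice_convert(supporting, target_speaker_id="target")
model = train_tts(target, synthetic)
model = fine_tune(model, target)
print(model["corpus_size"], model["fine_tuned_on"])  # 4 2
```

The key design point the sketch mirrors is that fine-tuning (step 3) uses only the real target recordings, so the final model is corrected for any artefacts introduced by the voice-converted data.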
Here you can listen to some randomly chosen (i.e., not cherry-picked) samples in the two settings detailed in the paper: the single-speaker newscaster setting and the multi-speaker conversational setting.
Single-speaker setting: Newscaster style
These samples are generated with the baseline data-reduced models (DR) and with our proposed data-reduced models, which leverage voice-converted synthetic data and a fine-tuning pass (DR + VC + FT). Both models have been trained on 30 minutes of newscaster recordings.
Multi-speaker setting: Conversational style
These samples are generated with the baseline non-data-reduced models (non-DR) and with our proposed data-reduced models, which leverage voice-converted synthetic data and a fine-tuning pass (DR + VC + FT). The proposed models have been trained on 30 minutes of conversational recordings, while the baseline non-DR models have been trained on 45 minutes, 1.5 hours, and 5 hours.