Neural Speech Synthesis with Transformer Network

Naihan Li; Shujie Liu; Yanqing Liu; Sheng Zhao; Ming Liu; Ming Zhou

Abstract

Although end-to-end neural text-to-speech (TTS) methods such as Tacotron2 have been proposed and achieve state-of-the-art performance, they still suffer from two problems: 1) low efficiency during training and inference; 2) difficulty modeling long-range dependencies with current recurrent neural networks (RNNs). Inspired by the success of the Transformer network in neural machine translation (NMT), in this paper we introduce and adapt the multi-head attention mechanism to replace the RNN structures, as well as the original attention mechanism, in Tacotron2. With multi-head self-attention, the hidden states in the encoder and decoder are constructed in parallel, which improves training efficiency. Meanwhile, any two inputs at different time steps are connected directly by the self-attention mechanism, which effectively solves the long-range dependency problem. Using phoneme sequences as input, our Transformer TTS network generates mel spectrograms, which a WaveNet vocoder then converts into the final audio. Experiments are conducted to test the efficiency and performance of the new network. For efficiency, our Transformer TTS network speeds up training by about 4.25 times compared with Tacotron2. For performance, rigorous human tests show that the proposed model achieves state-of-the-art quality (outperforming Tacotron2 by a gap of 0.048) and comes very close to human quality (4.39 vs. 4.44 in MOS).
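
To make the two claims above concrete, here is a minimal PyTorch sketch of one self-attention encoder layer applied to a phoneme sequence. It is an illustration, not the authors' code: the dimensions (d_model=256, 4 heads, a vocabulary of 80 phoneme IDs, length 50), the single layer, the random input, and the use of PyTorch's built-in nn.MultiheadAttention in place of the paper's own modules are all assumptions made for brevity, and the paper's positional encodings are omitted.

    import torch
    import torch.nn as nn

    d_model, n_heads, n_phonemes, seq_len, batch = 256, 4, 80, 50, 2

    embed = nn.Embedding(n_phonemes, d_model)            # phoneme ID -> vector
    attn  = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
    ffn   = nn.Sequential(nn.Linear(d_model, 4 * d_model),
                          nn.ReLU(),
                          nn.Linear(4 * d_model, d_model))
    norm1, norm2 = nn.LayerNorm(d_model), nn.LayerNorm(d_model)

    # Dummy phoneme sequence; a real model would also add positional
    # information here, which this sketch leaves out.
    phonemes = torch.randint(0, n_phonemes, (batch, seq_len))
    x = embed(phonemes)                                  # (batch, seq_len, d_model)

    # One attention call processes all 50 positions at once (no recurrence),
    # and its weight matrix connects every pair of positions directly.
    attn_out, weights = attn(x, x, x, need_weights=True)
    x = norm1(x + attn_out)                              # residual + layer norm
    x = norm2(x + ffn(x))                                # position-wise feed-forward

    print(x.shape)        # torch.Size([2, 50, 256]) -- encoder hidden states
    print(weights.shape)  # torch.Size([2, 50, 50])  -- attention over all position pairs

The (batch, 50, 50) attention matrix is the point of the exercise: every hidden state is computed in one parallel pass, and any two time steps are linked by a single attention hop rather than a chain of 50 recurrent steps, which is what the abstract credits for the training speedup and the improved long-range dependency modeling.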

Code Repositories

Munna-Manoj/Team6_FastSpeech2_TTS (PyTorch)
tartunlp/transformertts (TensorFlow)
as-ideas/TransformerTTS (TensorFlow)
soobinseo/transformer-tts (PyTorch)
choiHkk/Transformer-TTS (PyTorch)

Benchmarks

Benchmark: text-to-speech-synthesis-on-ljspeech
Methodology: Transformer TTS (Mel + WaveGlow)
Metrics: Audio Quality (MOS): 3.88
