HyperAI
Self-training and Pre-training are Complementary for Speech Recognition

Qiantong Xu, Alexei Baevski, Tatiana Likhomanenko, Paden Tomasello, Alexis Conneau, Ronan Collobert, Gabriel Synnaeve, Michael Auli
Abstract

Self-training and unsupervised pre-training have emerged as effective approaches to improve speech recognition systems using unlabeled data. However, it is not clear whether they learn similar patterns or if they can be effectively combined. In this paper, we show that pseudo-labeling and pre-training with wav2vec 2.0 are complementary in a variety of labeled data setups. Using just 10 minutes of labeled data from Libri-light together with 53k hours of unlabeled data from LibriVox, we achieve WERs of 3.0%/5.2% on the clean and other test sets of Librispeech, rivaling the best published systems trained on 960 hours of labeled data only a year ago. Training on all labeled data of Librispeech achieves WERs of 1.5%/3.1%.
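The self-training side of the approach can be illustrated with a minimal pseudo-labeling round: transcribe unlabeled audio with the current model, keep confident hypotheses, and add them to the training set. This is a sketch only; all names below (`pseudo_label_round`, `toy_predict`, the `confidence` threshold) are illustrative stand-ins, not the authors' code, which would use a wav2vec 2.0 pre-trained acoustic model and a beam-search decoder with a language model.

```python
def pseudo_label_round(model_predict, labeled, unlabeled, confidence=0.9):
    """One pseudo-labeling round (illustrative): transcribe each unlabeled
    utterance, keep hypotheses whose score clears the confidence threshold,
    and return the enlarged training set of (audio, transcript) pairs."""
    augmented = list(labeled)
    for utterance in unlabeled:
        transcript, score = model_predict(utterance)
        if score >= confidence:  # filter out low-confidence pseudo-labels
            augmented.append((utterance, transcript))
    return augmented

# Toy stand-in "model": returns a fixed transcript and a fake confidence
# score, just to make the loop runnable end to end.
def toy_predict(utterance):
    return ("hello world", 0.95 if len(utterance) > 3 else 0.5)

labeled = [("audio_a", "reference a"), ("audio_b", "reference b")]
unlabeled = ["audio_long_clip", "x"]
train_set = pseudo_label_round(toy_predict, labeled, unlabeled)
# "audio_long_clip" clears the threshold and is added; "x" is filtered out.
```

In practice this loop is repeated: the model retrained on the augmented set produces better pseudo-labels for the next round, and the paper's contribution is showing that starting this loop from a wav2vec 2.0 pre-trained model is better than either technique alone.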

Benchmarks

| Benchmark | Methodology | Word Error Rate (WER) |
|---|---|---|
| speech-recognition-on-librispeech-test-clean | wav2vec_wav2letter | 2.7 |
| speech-recognition-on-librispeech-test-clean | Conv + Transformer + wav2vec2.0 + pseudo labeling | 1.5 |
| speech-recognition-on-librispeech-test-other | Conv + Transformer + wav2vec2.0 + pseudo labeling | 3.1 |
| speech-recognition-on-librispeech-train-clean | wav2vec_wav2letter | 2.8 |
| speech-recognition-on-librispeech-train-clean-1 | wav2vec_wav2letter | 3.6 |