Semi-Supervised Speech Recognition via Local Prior Matching
Wei-Ning Hsu, Ann Lee, Gabriel Synnaeve, Awni Hannun

Abstract
For sequence transduction tasks like speech recognition, a strong structured prior model encodes rich information about the target space, implicitly ruling out invalid sequences by assigning them low probability. In this work, we propose local prior matching (LPM), a semi-supervised objective that distills knowledge from a strong prior (e.g. a language model) to provide learning signal to a discriminative model trained on unlabeled speech. We demonstrate that LPM is theoretically well-motivated, simple to implement, and superior to existing knowledge distillation techniques under comparable settings. Starting from a baseline trained on 100 hours of labeled speech, with an additional 360 hours of unlabeled data, LPM recovers 54% and 73% of the word error rate on clean and noisy test sets relative to a fully supervised model on the same data.
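The abstract describes the recipe only at a high level; below is a minimal PyTorch-style sketch of one plausible reading of it: hypotheses are proposed for an unlabeled utterance, rescored with a language-model prior that is renormalized over the beam, and the speech model is trained to match that local distribution. The interfaces `asr_model.beam_search`, `asr_model.log_prob`, and `lm.score`, as well as the length normalization and temperature, are assumptions for illustration and are not taken from the authors' implementation.

```python
import torch
import torch.nn.functional as F

def local_prior_matching_loss(asr_model, lm, audio, beam_size=8, temperature=1.0):
    """Sketch of a local-prior-matching-style loss for one unlabeled utterance.

    Assumes hypothetical interfaces:
      - asr_model.beam_search(audio, beam_size) -> list of token sequences
      - asr_model.log_prob(tokens, audio)       -> scalar log p_theta(y | x) with gradients
      - lm.score(tokens)                        -> scalar log-probability under the prior
    """
    # 1. Propose hypotheses from the current ASR model (no gradients through decoding).
    with torch.no_grad():
        hyps = asr_model.beam_search(audio, beam_size=beam_size)

    # 2. Score each hypothesis under the language-model prior (length-normalized here
    #    as one possible choice) and renormalize over the beam to get the local prior q.
    lm_scores = torch.tensor([float(lm.score(y)) / max(len(y), 1) for y in hyps])
    q = F.softmax(lm_scores / temperature, dim=0)

    # 3. Train the ASR model to match q on the unlabeled input:
    #    loss = - sum_y q(y) * log p_theta(y | x), summed over beam hypotheses.
    asr_logprobs = torch.stack([asr_model.log_prob(y, audio) for y in hyps])
    return -(q * asr_logprobs).sum()
```

In the semi-supervised setting described above, a loss of this form on the 360 hours of unlabeled audio would be combined with the usual supervised loss on the 100-hour labeled set.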
Benchmarks
| Benchmark | Method | Word Error Rate (WER) |
|---|---|---|
| LibriSpeech test-clean | Local Prior Matching (Large Model) | 7.19 |
| LibriSpeech test-other | Local Prior Matching (Large Model, ConvLM LM) | 15.28 |
| LibriSpeech test-other | Local Prior Matching (Large Model) | 20.84 |