CR-CTC: Consistency regularization on CTC for improved speech recognition
Zengwei Yao, Wei Kang, Xiaoyu Yang, Fangjun Kuang, Liyong Guo, Han Zhu, Zengrui Jin, Zhaoqing Li, Long Lin, Daniel Povey

Abstract
Connectionist Temporal Classification (CTC) is a widely used method for automatic speech recognition (ASR), renowned for its simplicity and computational efficiency. However, it often falls short in recognition performance. In this work, we propose the Consistency-Regularized CTC (CR-CTC), which enforces consistency between two CTC distributions obtained from different augmented views of the input speech mel-spectrogram. We provide in-depth insights into its essential behaviors from three perspectives: 1) it conducts self-distillation between random pairs of sub-models that process different augmented views; 2) it learns contextual representation through masked prediction for positions within time-masked regions, especially when we increase the amount of time masking; 3) it suppresses the extremely peaky CTC distributions, thereby reducing overfitting and improving the generalization ability. Extensive experiments on LibriSpeech, Aishell-1, and GigaSpeech datasets demonstrate the effectiveness of our CR-CTC. It significantly improves the CTC performance, achieving state-of-the-art results comparable to those attained by transducer or systems combining CTC and attention-based encoder-decoder (CTC/AED). We release our code at https://github.com/k2-fsa/icefall.
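The core idea above is a consistency loss between the two per-frame CTC posterior distributions produced from two different augmented views of the same utterance. As a minimal illustrative sketch (not the paper's exact formulation — the authors' implementation details, such as the stop-gradient direction and the loss weight, are in the linked repository), the regularizer can be written as a symmetric, frame-averaged KL divergence; here `alpha` is a hypothetical weighting hyperparameter:

```python
import math

def kl_div(p, q):
    """KL(p || q) for two discrete probability distributions (lists of floats)."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

def cr_loss(posteriors_a, posteriors_b):
    """Consistency term: symmetric KL between per-frame CTC posteriors
    obtained from two augmented views, averaged over frames.
    (Stop-gradient handling from the paper is omitted in this sketch.)"""
    assert len(posteriors_a) == len(posteriors_b)
    per_frame = [0.5 * (kl_div(p, q) + kl_div(q, p))
                 for p, q in zip(posteriors_a, posteriors_b)]
    return sum(per_frame) / len(per_frame)

def total_loss(ctc_loss_a, ctc_loss_b, posteriors_a, posteriors_b, alpha=0.2):
    """Hypothetical combined objective: average the two CTC losses and add
    the weighted consistency regularizer."""
    return 0.5 * (ctc_loss_a + ctc_loss_b) + alpha * cr_loss(posteriors_a, posteriors_b)
```

For example, two identical posterior sequences yield a consistency loss of zero, while diverging posteriors (e.g. one view peaky, the other smoothed, as in the paper's discussion of suppressing extremely peaky CTC distributions) are penalized in proportion to their disagreement.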
Benchmarks
| Benchmark | Methodology | Metrics |
|---|---|---|
| speech-recognition-on-aishell-1 | Zipformer+CR-CTC (no external language model) | Params (M): 66.2; Word Error Rate (WER): 4.02 |
| speech-recognition-on-gigaspeech-dev | Zipformer+pruned transducer w/ CR-CTC (no external language model) | Word Error Rate (WER): 9.95 |
| speech-recognition-on-gigaspeech-dev | Zipformer+CR-CTC (no external language model) | Word Error Rate (WER): 10.15 |
| speech-recognition-on-gigaspeech-dev | Zipformer+pruned transducer (no external language model) | Word Error Rate (WER): 10.09 |
| speech-recognition-on-gigaspeech-test | Zipformer+CR-CTC (no external language model) | Word Error Rate (WER): 10.28 |
| speech-recognition-on-gigaspeech-test | Zipformer+CR-CTC/AED (no external language model) | Word Error Rate (WER): 10.07 |
| speech-recognition-on-gigaspeech-test | Zipformer+pruned transducer w/ CR-CTC (no external language model) | Word Error Rate (WER): 10.03 |
| speech-recognition-on-gigaspeech-test | Zipformer+pruned transducer (no external language model) | Word Error Rate (WER): 10.2 |
| speech-recognition-on-librispeech-test-clean | Zipformer+CR-CTC (no external language model) | Word Error Rate (WER): 2.02 |
| speech-recognition-on-librispeech-test-clean | Zipformer+pruned transducer w/ CR-CTC (no external language model) | Word Error Rate (WER): 1.88 |
| speech-recognition-on-librispeech-test-other | Zipformer+pruned transducer w/ CR-CTC (no external language model) | Word Error Rate (WER): 3.95 |
| speech-recognition-on-librispeech-test-other | Zipformer+CR-CTC (no external language model) | Word Error Rate (WER): 4.35 |