
CR-CTC: Consistency regularization on CTC for improved speech recognition

Zengwei Yao, Wei Kang, Xiaoyu Yang, Fangjun Kuang, Liyong Guo, Han Zhu, Zengrui Jin, Zhaoqing Li, Long Lin, Daniel Povey

Abstract

Connectionist Temporal Classification (CTC) is a widely used method for automatic speech recognition (ASR), renowned for its simplicity and computational efficiency. However, it often falls short in recognition performance. In this work, we propose the Consistency-Regularized CTC (CR-CTC), which enforces consistency between two CTC distributions obtained from different augmented views of the input speech mel-spectrogram. We provide in-depth insights into its essential behaviors from three perspectives: 1) it conducts self-distillation between random pairs of sub-models that process different augmented views; 2) it learns contextual representation through masked prediction for positions within time-masked regions, especially when we increase the amount of time masking; 3) it suppresses the extremely peaky CTC distributions, thereby reducing overfitting and improving the generalization ability. Extensive experiments on LibriSpeech, Aishell-1, and GigaSpeech datasets demonstrate the effectiveness of our CR-CTC. It significantly improves the CTC performance, achieving state-of-the-art results comparable to those attained by transducer or systems combining CTC and attention-based encoder-decoder (CTC/AED). We release our code at https://github.com/k2-fsa/icefall.
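As a rough illustration of the objective described in the abstract, the sketch below combines standard CTC losses on two independently augmented views of the same mel-spectrogram with a symmetric KL-divergence consistency term between the two frame-level CTC distributions. The `encoder` and `spec_augment` modules, the stop-gradient on the target distribution, and the `cr_weight` value are placeholder assumptions for illustration only; the authors' actual implementation lives in the icefall repository linked above.

```python
import torch.nn.functional as F

def cr_ctc_loss(encoder, spec_augment, feats, feat_lens,
                targets, target_lens, cr_weight=0.2):
    """Sketch of a consistency-regularized CTC objective.

    encoder:      module mapping (N, T, F) features to frame-level CTC
                  log-probabilities (N, T', C) plus output lengths (assumed API)
    spec_augment: module applying random time/frequency masking (assumed API)
    cr_weight:    weight of the consistency term (illustrative value)
    """
    # Two independently augmented views of the same mel-spectrogram batch.
    view_a = spec_augment(feats)
    view_b = spec_augment(feats)

    # Shared encoder; each view yields its own frame-level CTC distribution.
    log_p_a, out_lens = encoder(view_a, feat_lens)
    log_p_b, _ = encoder(view_b, feat_lens)

    # Standard CTC loss on each view; F.ctc_loss expects (T, N, C).
    ctc_a = F.ctc_loss(log_p_a.transpose(0, 1), targets, out_lens, target_lens,
                       blank=0, reduction="sum", zero_infinity=True)
    ctc_b = F.ctc_loss(log_p_b.transpose(0, 1), targets, out_lens, target_lens,
                       blank=0, reduction="sum", zero_infinity=True)

    # Consistency regularization: each branch is pulled toward the other
    # branch's detached frame-level distribution (mutual self-distillation).
    kl_a = F.kl_div(log_p_a, log_p_b.detach(), reduction="sum", log_target=True)
    kl_b = F.kl_div(log_p_b, log_p_a.detach(), reduction="sum", log_target=True)

    return 0.5 * (ctc_a + ctc_b) + cr_weight * 0.5 * (kl_a + kl_b)
```

In practice the KL term would exclude padded frames and the losses would be normalized (e.g., by the number of frames or target tokens); the sketch omits this for brevity.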

Code Repositories

k2-fsa/icefall (official implementation, PyTorch)

Benchmarks

| Benchmark | Methodology | Metrics |
|---|---|---|
| speech-recognition-on-aishell-1 | Zipformer+CR-CTC (no external language model) | Params(M): 66.2; WER: 4.02 |
| speech-recognition-on-gigaspeech-dev | Zipformer+pruned transducer w/ CR-CTC (no external language model) | WER: 9.95 |
| speech-recognition-on-gigaspeech-dev | Zipformer+CR-CTC (no external language model) | WER: 10.15 |
| speech-recognition-on-gigaspeech-dev | Zipformer+pruned transducer (no external language model) | WER: 10.09 |
| speech-recognition-on-gigaspeech-test | Zipformer+CR-CTC (no external language model) | WER: 10.28 |
| speech-recognition-on-gigaspeech-test | Zipformer+CR-CTC/AED (no external language model) | WER: 10.07 |
| speech-recognition-on-gigaspeech-test | Zipformer+pruned transducer w/ CR-CTC (no external language model) | WER: 10.03 |
| speech-recognition-on-gigaspeech-test | Zipformer+pruned transducer (no external language model) | WER: 10.2 |
| speech-recognition-on-librispeech-test-clean | Zipformer+CR-CTC (no external language model) | WER: 2.02 |
| speech-recognition-on-librispeech-test-clean | Zipformer+pruned transducer w/ CR-CTC (no external language model) | WER: 1.88 |
| speech-recognition-on-librispeech-test-other | Zipformer+pruned transducer w/ CR-CTC (no external language model) | WER: 3.95 |
| speech-recognition-on-librispeech-test-other | Zipformer+CR-CTC (no external language model) | WER: 4.35 |
