Fast Conformer with Linearly Scalable Attention for Efficient Speech Recognition

Abstract

Conformer-based models have become the dominant end-to-end architecture for speech processing tasks. With the objective of enhancing the Conformer architecture for efficient training and inference, we carefully redesigned it with a novel downsampling schema. The proposed model, named Fast Conformer (FC), is 2.8x faster than the original Conformer, supports scaling to billions of parameters without any changes to the core architecture, and achieves state-of-the-art accuracy on Automatic Speech Recognition benchmarks. To enable transcription of long-form speech up to 11 hours, we replaced global attention with limited-context attention post-training, while also improving accuracy through fine-tuning with the addition of a global token. Fast Conformer, when combined with a Transformer decoder, also outperforms the original Conformer in accuracy and speed for Speech Translation and Spoken Language Understanding.
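The abstract's long-form trick is to swap full self-attention for limited-context (banded) attention plus a global token. A minimal sketch of how such an attention mask could be built is shown below; the window sizes and single-global-token layout are illustrative assumptions, not the paper's exact configuration:

```python
import numpy as np

def limited_context_mask(seq_len: int, left: int, right: int,
                         n_global: int = 1) -> np.ndarray:
    """Boolean mask: True where attention is allowed.

    Position i attends to the window [i - left, i + right]; the first
    `n_global` positions act as global tokens that attend everywhere
    and are attended to by every position. (Hypothetical parameters,
    for illustration only.)
    """
    idx = np.arange(seq_len)
    diff = idx[None, :] - idx[:, None]          # j - i for each (i, j)
    # Banded, limited-context part of the mask.
    mask = (diff >= -left) & (diff <= right)
    # Global tokens: allow their full rows and columns.
    mask[:n_global, :] = True
    mask[:, :n_global] = True
    return mask

mask = limited_context_mask(seq_len=8, left=2, right=2, n_global=1)
```

Because the band has fixed width, the number of allowed attention pairs grows linearly with sequence length rather than quadratically, which is what makes hours-long audio tractable.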

Benchmarks

Benchmark | Methodology | Metrics
speech-recognition-on-common-voice-english | parakeet-rnnt-1.1b | Word Error Rate (WER): 5.8%
speech-recognition-on-librispeech-test-clean | parakeet-rnnt-1.1b | Word Error Rate (WER): 1.46%
speech-recognition-on-spgispeech | parakeet-rnnt-1.1b | Word Error Rate (WER): 3.11%
speech-recognition-on-tedlium | parakeet-rnnt-1.1b | Word Error Rate (WER): 3.92%
