
Amortized Neural Networks for Low-Latency Speech Recognition

Jonathan Macoskey, Grant P. Strimel, Jinru Su, Ariya Rastrow

Abstract

We introduce Amortized Neural Networks (AmNets), a compute cost- and latency-aware network architecture particularly well-suited for sequence modeling tasks. We apply AmNets to the Recurrent Neural Network Transducer (RNN-T) to reduce compute cost and latency for an automatic speech recognition (ASR) task. The AmNets RNN-T architecture enables the network to dynamically switch between encoder branches on a frame-by-frame basis. Branches are constructed with variable levels of compute cost and model capacity. Here, we achieve variable compute for two well-known candidate techniques: one using sparse pruning and the other using matrix factorization. Frame-by-frame switching is determined by an arbitrator network that requires negligible compute overhead. We present results using both architectures on LibriSpeech data and show that our proposed architecture can reduce inference cost by up to 45% and latency to nearly real-time without incurring a loss in accuracy.
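The abstract describes an arbitrator network that routes each frame to one of several encoder branches of different compute cost, with the cheap branch built via pruning or matrix factorization. The following is a minimal PyTorch sketch of that idea, not the authors' implementation: the module names, layer sizes, the two-branch setup (a full linear layer versus a low-rank factorized one), and the hard arg-max routing rule are all illustrative assumptions.

```python
# Sketch of frame-by-frame branch switching with a lightweight arbitrator.
# All names, dimensions, and the routing rule are assumptions for illustration.
import torch
import torch.nn as nn


class LowRankLinear(nn.Module):
    """Cheap branch layer: weight matrix factorized as W ~= U @ V (low rank)."""

    def __init__(self, in_dim: int, out_dim: int, rank: int):
        super().__init__()
        self.v = nn.Linear(in_dim, rank, bias=False)  # in_dim -> rank
        self.u = nn.Linear(rank, out_dim)             # rank   -> out_dim

    def forward(self, x):
        return self.u(self.v(x))


class AmortizedEncoderSketch(nn.Module):
    """Two encoder branches plus a tiny arbitrator that picks one per frame."""

    def __init__(self, feat_dim: int = 80, hidden_dim: int = 512, rank: int = 64):
        super().__init__()
        self.full_branch = nn.Linear(feat_dim, hidden_dim)          # expensive branch
        self.cheap_branch = LowRankLinear(feat_dim, hidden_dim, rank)  # factorized branch
        self.arbitrator = nn.Linear(feat_dim, 2)                    # negligible overhead

    def forward(self, frames):
        # frames: (batch, time, feat_dim)
        decisions = self.arbitrator(frames).argmax(dim=-1)          # (batch, time)
        full_out = self.full_branch(frames)
        cheap_out = self.cheap_branch(frames)
        # This sketch computes both branches and selects per frame; a real
        # inference engine would only evaluate the branch the arbitrator chose.
        mask = (decisions == 0).unsqueeze(-1)                       # (batch, time, 1)
        return torch.where(mask, full_out, cheap_out)


if __name__ == "__main__":
    enc = AmortizedEncoderSketch()
    x = torch.randn(2, 100, 80)   # 2 utterances, 100 frames of 80-dim features
    print(enc(x).shape)           # torch.Size([2, 100, 512])
```

Note that the hard arg-max decision shown here is not differentiable; training such a router typically requires a relaxation or an auxiliary objective, which this sketch does not cover.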

Benchmarks

Benchmark: speech-recognition-on-librispeech-test-clean
Methodology: AmNet
Metrics: Word Error Rate (WER): 8.6
