Transformer Quality in Linear Time

Weizhe Hua Zihang Dai Hanxiao Liu Quoc V. Le

Abstract

We revisit the design choices in Transformers, and propose methods to address their weaknesses in handling long sequences. First, we propose a simple layer named gated attention unit, which allows the use of a weaker single-head attention with minimal quality loss. We then propose a linear approximation method complementary to this new layer, which is accelerator-friendly and highly competitive in quality. The resulting model, named FLASH, matches the perplexity of improved Transformers over both short (512) and long (8K) context lengths, achieving training speedups of up to 4.9× on Wiki-40B and 12.1× on PG-19 for auto-regressive language modeling, and 4.8× on C4 for masked language modeling.
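
The abstract describes two pieces: the gated attention unit (GAU), which pairs a GLU-style gating branch with a weak single-head, softmax-free attention, and a chunk-based linear approximation that turns the quadratic variant (FLASH-Quad) into the linear-time FLASH. The sketch below illustrates only the quadratic GAU in PyTorch. The class and hyperparameter names (expansion_factor, query_key_dim), the SiLU activations, and the omission of relative position bias are simplifying assumptions made here for illustration, not the authors' reference implementation (see lucidrains/FLASH-pytorch for a complete version).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GatedAttentionUnit(nn.Module):
    """Minimal GAU sketch: single-head, squared-ReLU attention gated by a GLU-style branch.

    Hyperparameter names and defaults are illustrative assumptions, not the
    paper's reference configuration.
    """

    def __init__(self, dim, expansion_factor=2, query_key_dim=128):
        super().__init__()
        hidden = dim * expansion_factor
        self.to_gate = nn.Linear(dim, hidden)       # U: gating branch
        self.to_value = nn.Linear(dim, hidden)      # V: value branch
        self.to_qk = nn.Linear(dim, query_key_dim)  # Z: shared low-dim basis for Q and K
        # cheap per-dimension scale/offset that specializes Z into Q and K
        self.gamma = nn.Parameter(torch.ones(2, query_key_dim))
        self.beta = nn.Parameter(torch.zeros(2, query_key_dim))
        self.to_out = nn.Linear(hidden, dim)

    def forward(self, x):
        # x: (batch, seq_len, dim); relative position bias omitted for brevity
        n = x.shape[-2]
        u = F.silu(self.to_gate(x))                   # (b, n, hidden)
        v = F.silu(self.to_value(x))                  # (b, n, hidden)
        z = F.silu(self.to_qk(x))                     # (b, n, s)
        q, k = (z.unsqueeze(-2) * self.gamma + self.beta).unbind(dim=-2)
        scores = torch.einsum('bns,bms->bnm', q, k) / n
        attn = F.relu(scores) ** 2                    # softmax-free attention weights
        out = torch.einsum('bnm,bmd->bnd', attn, v)   # (b, n, hidden)
        return self.to_out(u * out)                   # gate, then project back to dim


# usage sketch
layer = GatedAttentionUnit(dim=512)
y = layer(torch.randn(2, 1024, 512))  # y: (2, 1024, 512)
```

In the linear-time FLASH model, the sequence is additionally split into fixed-size chunks, with quadratic attention applied within each chunk and a linear attention accumulating information across chunks; that chunking machinery is not shown in the sketch above.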

Code Repositories

lucidrains/FLASH-pytorch (PyTorch), mentioned in GitHub
zhuiyitechnology/gau-alpha (TensorFlow), mentioned in GitHub

Benchmarks

Benchmark                        Methodology     Metrics
language-modelling-on-wiki-40b   FLASH-Quad-8k   Perplexity: 14.998
