Generating Long Sequences with Sparse Transformers

Rewon Child, Scott Gray, Alec Radford, Ilya Sutskever

Abstract

Transformers are powerful sequence models, but require time and memory that grows quadratically with the sequence length. In this paper we introduce sparse factorizations of the attention matrix which reduce this to $O(n \sqrt{n})$. We also introduce a) a variation on architecture and initialization to train deeper networks, b) the recomputation of attention matrices to save memory, and c) fast attention kernels for training. We call networks with these changes Sparse Transformers, and show they can model sequences tens of thousands of timesteps long using hundreds of layers. We use the same architecture to model images, audio, and text from raw bytes, setting a new state of the art for density modeling of Enwik8, CIFAR-10, and ImageNet-64. We generate unconditional samples that demonstrate global coherence and great diversity, and show it is possible in principle to use self-attention to model sequences of length one million or more.
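To make the factorization concrete, the sketch below is a minimal NumPy illustration (not the authors' released kernels) of the two attention patterns the paper describes, strided and fixed. The function and parameter names (strided_mask, fixed_mask, seq_len, stride) are assumptions for this example. With a stride of roughly √n, each row of the mask keeps O(√n) positions, which is where the O(n √n) cost comes from.

```python
# Minimal sketch, assuming a dense boolean mask for readability only.
# In the paper the two patterns are split across attention heads or
# interleaved across layers rather than combined into one dense matrix.
import numpy as np

def strided_mask(seq_len: int, stride: int) -> np.ndarray:
    """Strided pattern: position i attends to the previous `stride`
    positions (local) and to every stride-th earlier position (column)."""
    mask = np.zeros((seq_len, seq_len), dtype=bool)
    for i in range(seq_len):
        for j in range(i + 1):                      # causal: j <= i
            local = (i - j) < stride                # recent positions
            column = ((i - j) % stride) == 0        # strided "summary" positions
            mask[i, j] = local or column
    return mask

def fixed_mask(seq_len: int, stride: int) -> np.ndarray:
    """Fixed pattern: position i attends within its own block of width
    `stride` and to the final position of each preceding block
    (the paper uses the last c positions per block; c = 1 here)."""
    mask = np.zeros((seq_len, seq_len), dtype=bool)
    for i in range(seq_len):
        for j in range(i + 1):
            same_block = (j // stride) == (i // stride)
            summary = (j % stride) == stride - 1    # last column of each block
            mask[i, j] = same_block or summary
    return mask

# With stride ~ sqrt(seq_len), each row has O(sqrt(n)) allowed positions,
# so full-sequence attention costs O(n * sqrt(n)) rather than O(n^2).
print(strided_mask(16, 4).sum(), fixed_mask(16, 4).sum())
```

Either mask can be applied by setting disallowed attention logits to negative infinity before the softmax; the paper additionally uses block-sparse kernels so the masked-out positions are never computed at all.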

Code Repositories

mistralai/mistral-src (PyTorch)
ptillet/torch-blocksparse (PyTorch)
wilson1yan/VideoGPT (PyTorch)
openai/sparse_attention (Official, TensorFlow)
han-shi/SparseBERT (PyTorch)

Benchmarks

Benchmark | Methodology | Metrics
audio-generation-on-classical-music-5-seconds | Sparse Transformer 152M (strided) | Bits per byte: 1.97
image-generation-on-imagenet-64x64 | Sparse Transformer 59M (strided) | Bits per dim: 3.44
language-modelling-on-enwiki8 | Sparse Transformer (30 layers, fixed attn) | Bit per Character (BPC): 0.99; Number of params: 95M
open-domain-question-answering-on-searchqa | Sparse Attention | EM: 64.7
question-answering-on-natural-questions-long | Sparse Attention | F1: 74.5
question-answering-on-quasart-t | Sparse Attention | EM: 52.1
