Augmenting Self-attention with Persistent Memory

Sainbayar Sukhbaatar, Edouard Grave, Guillaume Lample, Hervé Jégou, Armand Joulin


Abstract

Transformer networks have led to important progress in language modeling and machine translation. These models include two consecutive modules, a feed-forward layer and a self-attention layer. The latter allows the network to capture long-term dependencies and is often regarded as the key ingredient in the success of Transformers. Building upon this intuition, we propose a new model that consists solely of attention layers. More precisely, we augment the self-attention layers with persistent memory vectors that play a similar role to the feed-forward layer. Thanks to these vectors, we can remove the feed-forward layer without degrading the performance of a transformer. Our evaluation shows the benefits brought by our model on standard character- and word-level language modeling benchmarks.
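The core idea is that a small set of learned key/value vectors, shared across all positions, is concatenated with the keys and values computed from the input, so a single attention sublayer can absorb the role of the feed-forward sublayer. Below is a minimal PyTorch sketch of one such "all-attention" layer; it is not the authors' implementation, and the names (PersistentMemoryAttention, n_persist) and layer-norm placement are illustrative assumptions.

```python
import math
import torch
import torch.nn as nn

class PersistentMemoryAttention(nn.Module):
    """Self-attention layer with learned persistent key/value vectors and no feed-forward sublayer."""
    def __init__(self, dim, n_heads, n_persist):
        super().__init__()
        assert dim % n_heads == 0
        self.n_heads = n_heads
        self.head_dim = dim // n_heads
        self.q_proj = nn.Linear(dim, dim)
        self.k_proj = nn.Linear(dim, dim)
        self.v_proj = nn.Linear(dim, dim)
        self.out_proj = nn.Linear(dim, dim)
        # Persistent memory: n_persist learned key/value vectors per head,
        # shared across all positions and all examples in the batch.
        self.persist_k = nn.Parameter(torch.randn(n_heads, n_persist, self.head_dim) * 0.02)
        self.persist_v = nn.Parameter(torch.randn(n_heads, n_persist, self.head_dim) * 0.02)
        self.norm = nn.LayerNorm(dim)

    def forward(self, x):
        # x: (batch, seq_len, dim)
        b, t, d = x.shape
        h, hd = self.n_heads, self.head_dim
        q = self.q_proj(x).view(b, t, h, hd).transpose(1, 2)   # (b, h, t, hd)
        k = self.k_proj(x).view(b, t, h, hd).transpose(1, 2)
        v = self.v_proj(x).view(b, t, h, hd).transpose(1, 2)

        # Concatenate persistent vectors with the context keys/values.
        pk = self.persist_k.unsqueeze(0).expand(b, -1, -1, -1)  # (b, h, n_persist, hd)
        pv = self.persist_v.unsqueeze(0).expand(b, -1, -1, -1)
        k = torch.cat([k, pk], dim=2)                            # (b, h, t + n_persist, hd)
        v = torch.cat([v, pv], dim=2)

        # Causal mask over context positions; persistent slots are always visible.
        scores = q @ k.transpose(-2, -1) / math.sqrt(hd)          # (b, h, t, t + n_persist)
        causal = torch.tril(torch.ones(t, t, dtype=torch.bool, device=x.device))
        persist_vis = torch.ones(t, k.size(2) - t, dtype=torch.bool, device=x.device)
        mask = torch.cat([causal, persist_vis], dim=1)
        scores = scores.masked_fill(~mask, float("-inf"))

        attn = scores.softmax(dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(b, t, d)
        return self.norm(x + self.out_proj(out))                  # no feed-forward sublayer
```

Stacking such layers yields the all-attention network evaluated in the paper's benchmarks; the paper additionally uses adaptive attention spans and relative position embeddings, which are omitted here for brevity.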

Code Repositories

facebookresearch/adaptive-span (PyTorch)
lucidrains/x-transformers (PyTorch)
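For reference, lucidrains/x-transformers exposes persistent memory key/values through a constructor keyword. A rough usage sketch, assuming the attn_num_mem_kv keyword is available in the installed version:

```python
# Assumes x-transformers is installed and that attn_num_mem_kv controls the number
# of persistent memory key/value vectors added to each attention layer.
import torch
from x_transformers import TransformerWrapper, Decoder

model = TransformerWrapper(
    num_tokens = 256,            # e.g. byte-level vocabulary for enwik8
    max_seq_len = 512,
    attn_layers = Decoder(
        dim = 512,
        depth = 8,
        heads = 8,
        attn_num_mem_kv = 16,    # persistent memory vectors per attention layer
    ),
)

tokens = torch.randint(0, 256, (1, 512))
logits = model(tokens)           # (1, 512, 256)
```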

Benchmarks

| Benchmark | Methodology | Metrics |
|---|---|---|
| language-modelling-on-enwiki8 | All-attention network (36 layers) | Number of params: 114M |
| language-modelling-on-enwiki8 | All-attention network (18 layers) | Bit per Character (BPC): 1.01, Number of params: 39M |
| language-modelling-on-text8 | All-attention network (36 layers) | Bit per Character (BPC): 1.08, Number of params: 114M |
| language-modelling-on-text8 | All-attention network (18 layers) | Bit per Character (BPC): 1.11, Number of params: 38M |
| language-modelling-on-wikitext-103 | All-attention network (36 layers) | Validation perplexity: 19.7, Test perplexity: 20.6, Number of params: 133M |
