Adaptively Sparse Transformers

Gonçalo M. Correia, Vlad Niculae, André F. T. Martins

Abstract

Attention mechanisms have become ubiquitous in NLP. Recent architectures, notably the Transformer, learn powerful context-aware word representations through layered, multi-headed attention. The multiple heads learn diverse types of word relationships. However, with standard softmax attention, all attention heads are dense, assigning a non-zero weight to all context words. In this work, we introduce the adaptively sparse Transformer, wherein attention heads have flexible, context-dependent sparsity patterns. This sparsity is accomplished by replacing softmax with $α$-entmax: a differentiable generalization of softmax that allows low-scoring words to receive precisely zero weight. Moreover, we derive a method to automatically learn the $α$ parameter -- which controls the shape and sparsity of $α$-entmax -- allowing attention heads to choose between focused or spread-out behavior. Our adaptively sparse Transformer improves interpretability and head diversity when compared to softmax Transformers on machine translation datasets. Findings of the quantitative and qualitative analysis of our approach include that heads in different layers learn different sparsity preferences and tend to be more diverse in their attention distributions than softmax Transformers. Furthermore, at no cost in accuracy, sparsity in attention heads helps to uncover different head specializations.
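
As a concrete illustration, the sketch below shows a single attention head in which softmax is replaced by $α$-entmax and $α$ is learned jointly with the other parameters. This is a minimal sketch, not the authors' exact implementation; it assumes the entmax_bisect function from the entmax package listed under Code Repositories below, which accepts alpha as a tensor so that gradients can flow into it.

# Minimal sketch (not the authors' exact code): one attention head whose
# softmax is replaced by alpha-entmax, with alpha learned during training.
# Assumes entmax_bisect(scores, alpha, dim) from the deep-spin/entmax package.
import torch
import torch.nn as nn
from entmax import entmax_bisect


class AdaptivelySparseHead(nn.Module):
    def __init__(self, d_model: int, d_head: int):
        super().__init__()
        self.q = nn.Linear(d_model, d_head)
        self.k = nn.Linear(d_model, d_head)
        self.v = nn.Linear(d_model, d_head)
        # Unconstrained scalar, mapped to alpha in the open interval (1, 2):
        # alpha -> 1 approaches softmax, alpha -> 2 approaches sparsemax.
        self.alpha_raw = nn.Parameter(torch.zeros(1))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        q, k, v = self.q(x), self.k(x), self.v(x)
        scores = q @ k.transpose(-2, -1) / (q.size(-1) ** 0.5)
        alpha = 1.0 + torch.sigmoid(self.alpha_raw)           # keep alpha in (1, 2)
        weights = entmax_bisect(scores, alpha=alpha, dim=-1)  # exact zeros allowed
        return weights @ v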

Code Repositories

- prajjwal1/adaptive_transformer (PyTorch)
- prajjwal1/fluence (PyTorch)
- deep-spin/entmax (official, PyTorch; a short usage sketch follows this list)
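
The Benchmarks table below distinguishes 1.5-entmax, which fixes alpha = 1.5 for every head, from alpha-entmax, where each head learns its own alpha. The short sketch below, assuming the entmax15 and sparsemax functions from the official deep-spin/entmax package above, shows how these mappings differ from softmax by giving exactly zero weight to low-scoring positions.

# Sketch: softmax versus the sparse mappings in the deep-spin/entmax package.
import torch
from entmax import entmax15, sparsemax

scores = torch.tensor([[3.0, 1.5, 0.2, -1.0]])
print(torch.softmax(scores, dim=-1))  # every weight strictly positive
print(entmax15(scores, dim=-1))       # alpha = 1.5: the two lowest scores get 0
print(sparsemax(scores, dim=-1))      # alpha = 2: sparsest here, a single nonzero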

Benchmarks

Benchmark                                     | Methodology                                  | Metrics
machine-translation-on-iwslt2017-german       | Adaptively Sparse Transformer (alpha-entmax) | BLEU score: 29.9
machine-translation-on-iwslt2017-german       | Adaptively Sparse Transformer (1.5-entmax)   | BLEU score: 29.83
machine-translation-on-wmt2014-english-german | Adaptively Sparse Transformer (alpha-entmax) | BLEU score: 26.93
machine-translation-on-wmt2014-english-german | Adaptively Sparse Transformer (1.5-entmax)   | BLEU score: 25.89
machine-translation-on-wmt2016-romanian       | Adaptively Sparse Transformer (1.5-entmax)   | BLEU score: 33.1
machine-translation-on-wmt2016-romanian       | Adaptively Sparse Transformer (alpha-entmax) | BLEU score: 32.89
