The Information Pathways Hypothesis: Transformers are Dynamic Self-Ensembles

Md Shamim Hussain, Mohammed J. Zaki, Dharmashankar Subramanian

Abstract

Transformers use the dense self-attention mechanism, which gives them great flexibility for long-range connectivity. Over the multiple layers of a deep transformer, the number of possible connectivity patterns grows exponentially, yet only a few of these contribute to the network's performance, and even fewer are essential. We hypothesize that there are sparsely connected sub-networks within a transformer, called information pathways, which can be trained independently. The dynamic (i.e., input-dependent) nature of these pathways makes it difficult to prune dense self-attention during training, but their overall distribution is often predictable. We take advantage of this fact to propose Stochastically Subsampled self-Attention (SSA), a general-purpose training strategy for transformers that reduces both the memory and computational cost of self-attention by 4 to 8 times during training, while also serving as a regularization method that improves generalization over dense training. We show that an ensemble of sub-models can be formed from the subsampled pathways within a network, achieving better performance than its densely attended counterpart. We perform experiments on a variety of NLP, computer vision, and graph learning tasks, in both generative and discriminative settings, to provide empirical evidence for our claims and to show the effectiveness of the proposed method.
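For intuition, the snippet below is a minimal sketch of what stochastic subsampling of self-attention can look like. The uniform sampling of key/value positions, the keep_ratio parameter, and the single-head formulation are illustrative assumptions rather than the paper's exact algorithm; the reference implementation is in the official repository linked below.

```python
# Illustrative sketch only: uniform random subsampling of key/value positions
# during training. The actual sampling distributions and implementation details
# of SSA are in the official repository (shamim-hussain/ssa).
import torch
import torch.nn.functional as F


def subsampled_attention(q, k, v, keep_ratio=0.25, training=True):
    """Single-head scaled dot-product attention over a random subset of keys.

    q, k, v: (batch, seq_len, head_dim) tensors.
    keep_ratio: fraction of key/value positions attended to while training
                (an assumed hyperparameter for this sketch).
    """
    scale = k.shape[-1] ** -0.5
    if not training or keep_ratio >= 1.0:
        # Dense attention at inference time.
        return F.softmax(q @ k.transpose(-2, -1) * scale, dim=-1) @ v

    seq_len = k.shape[1]
    n_keep = max(1, int(seq_len * keep_ratio))
    # One random subset shared across the batch (a simplification; structured
    # or per-example sampling schemes are equally possible).
    idx = torch.randperm(seq_len, device=k.device)[:n_keep]
    k_sub, v_sub = k[:, idx], v[:, idx]

    # The score matrix shrinks from O(L^2) to O(L * n_keep) entries.
    return F.softmax(q @ k_sub.transpose(-2, -1) * scale, dim=-1) @ v_sub


# Example usage:
# q, k, v = (torch.randn(2, 1024, 64) for _ in range(3))
# out = subsampled_attention(q, k, v, keep_ratio=0.25)  # -> (2, 1024, 64)
```

Because a different subset is drawn at every training step, each step effectively trains a different sparse sub-network (information pathway) of the same model.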

Code Repositories

shamim-hussain/ssa
Official

Benchmarks

Benchmark | Methodology | Metrics
graph-regression-on-pcqm4mv2-lsc | EGT+SSA+Self-ensemble | Validation MAE: 0.0865
graph-regression-on-pcqm4mv2-lsc | EGT+SSA | Validation MAE: 0.0876
image-classification-on-imagenet | Swin-T+SSA | Top 1 Accuracy: 81.89%
language-modelling-on-enwiki8 | Transformer+SSA | Bit per Character (BPC): 1.024
language-modelling-on-wikitext-103 | Transformer+SSA+Self-ensemble | Test perplexity: 17.18; Validation perplexity: 16.54
language-modelling-on-wikitext-103 | Transformer+SSA | Test perplexity: 17.60; Validation perplexity: 16.91
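The "+Self-ensemble" rows above average predictions from multiple stochastically subsampled forward passes of a single trained model. Below is a hedged sketch of that idea; the number of members, the probability (rather than logit) averaging, and the reuse of training-mode sampling at inference are assumptions for illustration, not the paper's exact procedure.

```python
# Hedged sketch of self-ensembling: average the outputs of several subsampled
# forward passes of the same trained model.
import torch


@torch.no_grad()
def self_ensemble_predict(model, x, n_members=4):
    """Average softmax outputs over n_members stochastic passes of `model`.

    Assumes `model(x)` returns class logits and applies subsampled attention
    whenever the module is in training mode. Note: `model.train()` would also
    re-enable dropout/batch-norm updates; a real implementation would toggle
    only the attention subsampling.
    """
    model.train()  # keep stochastic subsampling active for each member
    probs = torch.stack(
        [torch.softmax(model(x), dim=-1) for _ in range(n_members)]
    ).mean(dim=0)
    model.eval()
    return probs
```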
