
Investigating Efficiently Extending Transformers for Long Input Summarization

Jason Phang, Yao Zhao, Peter J. Liu

Abstract

While large pretrained Transformer models have proven highly capable at tackling natural language tasks, handling long sequence inputs continues to be a significant challenge. One such task is long input summarization, where inputs are longer than the maximum input context of most pretrained models. Through an extensive set of experiments, we investigate what model architectural changes and pretraining paradigms can most efficiently adapt a pretrained Transformer for long input summarization. We find that a staggered, block-local Transformer with global encoder tokens strikes a good balance of performance and efficiency, and that an additional pretraining phase on long sequences meaningfully improves downstream summarization performance. Based on our findings, we introduce PEGASUS-X, an extension of the PEGASUS model with additional long input pretraining to handle inputs of up to 16K tokens. PEGASUS-X achieves strong performance on long input summarization tasks comparable with much larger models while adding few additional parameters and not requiring model parallelism to train.
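
The core architectural idea named in the abstract — block-local self-attention in the encoder combined with a small set of global tokens that every block can attend to — can be illustrated with a short sketch. The following is a minimal NumPy illustration under our own assumptions; the block size, number of global tokens, and function names are illustrative and are not the authors' implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def block_local_attention_with_global(q, k, v, g_k, g_v, block_size):
    """Illustrative sketch: each query attends to the keys/values in its own
    local block plus a shared set of global key/value tokens.

    q, k, v: (seq_len, d) projections of the local tokens
    g_k, g_v: (num_global, d) projections of the global encoder tokens
    """
    seq_len, d = q.shape
    assert seq_len % block_size == 0, "pad the sequence to a multiple of block_size"
    n_blocks = seq_len // block_size

    out = np.zeros_like(v)
    for b in range(n_blocks):
        sl = slice(b * block_size, (b + 1) * block_size)
        # Keys/values visible to this block: its own local block plus all global tokens.
        k_b = np.concatenate([k[sl], g_k], axis=0)   # (block_size + num_global, d)
        v_b = np.concatenate([v[sl], g_v], axis=0)
        scores = q[sl] @ k_b.T / np.sqrt(d)          # (block_size, block_size + num_global)
        out[sl] = softmax(scores) @ v_b
    return out

# Toy usage: 16 tokens, block size 4, 2 global tokens.
rng = np.random.default_rng(0)
d = 8
q, k, v = (rng.normal(size=(16, d)) for _ in range(3))
g_k, g_v = (rng.normal(size=(2, d)) for _ in range(2))
print(block_local_attention_with_global(q, k, v, g_k, g_v, block_size=4).shape)  # (16, 8)
```

The "staggered" variant referred to in the abstract additionally shifts the block boundaries by half a block on alternating encoder layers so information can propagate across block edges; that per-layer offset is omitted from the sketch for brevity.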

Benchmarks

Benchmark | Methodology | Metrics
long-range-modeling-on-scrolls | PEGASUS-X-Base | GovRep: 59.3 / 29.3 / 30.9; QMSum: 32.9 / 9.8 / 21.4; SumScr: 35.0 / 8.9 / 20.4
long-range-modeling-on-scrolls | PEGASUS-X | GovRep: 60.3 / 30.0 / 31.5; QMSum: 33.2 / 9.6 / 21.6; SumScr: 35.7 / 9.1 / 20.6
text-summarization-on-arxiv | PEGASUS-X | ROUGE-1: 50.0; ROUGE-2: 21.8; ROUGE-L: 44.6
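
As a rough usage sketch, PEGASUS-X checkpoints can be run for long-document summarization through the Hugging Face transformers library. The checkpoint id google/pegasus-x-base and the 16,384-token input cap below are assumptions based on the abstract and the publicly released checkpoints, not details stated on this page:

```python
# Minimal sketch, assuming the Hugging Face `transformers` library and the
# publicly released checkpoint id "google/pegasus-x-base" (an assumption,
# not stated on this page).
from transformers import AutoTokenizer, PegasusXForConditionalGeneration

model_name = "google/pegasus-x-base"  # assumed checkpoint id
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = PegasusXForConditionalGeneration.from_pretrained(model_name)

document = "..."  # a long input document; the abstract cites inputs up to 16K tokens
inputs = tokenizer(document, return_tensors="pt", truncation=True, max_length=16384)
summary_ids = model.generate(**inputs, max_new_tokens=256, num_beams=4)
print(tokenizer.decode(summary_ids[0], skip_special_tokens=True))
```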
