Adapting Pretrained Text-to-Text Models for Long Text Sequences

Wenhan Xiong, Anchit Gupta, Shubham Toshniwal, Yashar Mehdad, Wen-tau Yih

Abstract

We present an empirical study of adapting an existing pretrained text-to-text model for long-sequence inputs. Through a comprehensive study along three axes of the pretraining pipeline (model architecture, optimization objective, and pretraining corpus), we propose an effective recipe to build long-context models from existing short-context models. Specifically, we replace the full attention in transformers with pooling-augmented blockwise attention and pretrain the model with a masked-span prediction task using spans of varying length. For the pretraining corpus, we find that randomly concatenated short documents from a large open-domain corpus yield better performance than existing long-document corpora, which are typically limited in their domain coverage. With these findings, we build a long-context model that achieves competitive performance on long-text QA tasks and establishes a new state of the art on five long-text summarization datasets, often outperforming previous methods with larger model sizes. Our code has been released at https://github.com/facebookresearch/bart_ls.
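
A minimal sketch can make the attention change concrete. The PyTorch module below approximates pooling-augmented blockwise attention in its simplest form: each query attends to the keys and values of its own local block plus an average-pooled summary of the full sequence, which keeps the cost roughly linear in sequence length while retaining coarse global context. The block size, pooling stride, single-head formulation, and module name are illustrative assumptions rather than the released implementation; see the linked repository for the authors' code.

```python
# Illustrative sketch only: a single-head, pooling-augmented blockwise attention
# layer. Assumes seq_len is a multiple of both block_size and pool_stride.
import torch
import torch.nn.functional as F
from torch import nn


class PoolingBlockAttention(nn.Module):
    def __init__(self, dim, block_size=1024, pool_stride=16):
        super().__init__()
        self.block_size = block_size
        self.pool_stride = pool_stride
        self.q_proj = nn.Linear(dim, dim)
        self.k_proj = nn.Linear(dim, dim)
        self.v_proj = nn.Linear(dim, dim)
        self.out_proj = nn.Linear(dim, dim)
        self.scale = dim ** -0.5

    def forward(self, x):
        bsz, seq_len, dim = x.shape          # x: (batch, seq_len, dim)
        n_blocks = seq_len // self.block_size

        q = self.q_proj(x) * self.scale
        k, v = self.k_proj(x), self.v_proj(x)

        # Local attention: split queries/keys/values into non-overlapping blocks.
        as_blocks = lambda t: t.view(bsz, n_blocks, self.block_size, dim)
        q_b, k_b, v_b = as_blocks(q), as_blocks(k), as_blocks(v)

        # Global context: average-pool keys/values over the whole sequence.
        k_g = F.avg_pool1d(k.transpose(1, 2), self.pool_stride).transpose(1, 2)
        v_g = F.avg_pool1d(v.transpose(1, 2), self.pool_stride).transpose(1, 2)

        # Every block attends to its own tokens plus the pooled summaries.
        k_all = torch.cat([k_b, k_g.unsqueeze(1).expand(-1, n_blocks, -1, -1)], dim=2)
        v_all = torch.cat([v_b, v_g.unsqueeze(1).expand(-1, n_blocks, -1, -1)], dim=2)

        attn = torch.softmax(q_b @ k_all.transpose(-1, -2), dim=-1)
        out = (attn @ v_all).reshape(bsz, seq_len, dim)
        return self.out_proj(out)
```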

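The pretraining objective and corpus recipe can be sketched in a similar spirit. Assuming a T5-style denoising setup, the snippet below first concatenates randomly ordered short documents into one long sequence (rather than relying on a long-document corpus) and then replaces spans of varying length with sentinel tokens for the model to reconstruct. The masking ratio, span-length range, context length, and function names are hypothetical choices for illustration, not the exact hyperparameters used in the paper.

```python
# Illustrative sketch only: build a long pretraining example from short documents
# and corrupt it with variable-length masked spans (T5-style sentinels assumed).
import random


def build_long_example(short_docs, max_len=16384, sep_token="</s>"):
    """Randomly concatenate short documents (lists of tokens) into one long sequence."""
    random.shuffle(short_docs)
    tokens = []
    for doc in short_docs:
        if len(tokens) >= max_len:
            break
        tokens.extend(doc + [sep_token])
    return tokens[:max_len]


def mask_spans(tokens, mask_prob=0.15, min_span=1, max_span=10):
    """Replace randomly chosen spans of varying length with sentinel tokens.

    Returns the corrupted input and a list of (sentinel, original span) targets.
    Roughly mask_prob of all tokens end up inside masked spans, because spans
    average (min_span + max_span) / 2 tokens each.
    """
    corrupted, targets = [], []
    sentinel, i = 0, 0
    start_prob = mask_prob / ((min_span + max_span) / 2)
    while i < len(tokens):
        if random.random() < start_prob:
            span = random.randint(min_span, max_span)
            targets.append((f"<extra_id_{sentinel}>", tokens[i:i + span]))
            corrupted.append(f"<extra_id_{sentinel}>")
            sentinel += 1
            i += span
        else:
            corrupted.append(tokens[i])
            i += 1
    return corrupted, targets
```
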
Code Repositories

facebookresearch/bart_ls (official implementation, PyTorch)

Benchmarks

Benchmark | Methodology | Metrics
long-range-modeling-on-scrolls | BART-LS | Avg.: 39.76; CNLI: 87.1; GovRep: 59.4 / 29.8 / 30.8; Nrtv: 26.2; QALT EM-T/H: 37.8 / 34.0; QMSum: 35.1 / 11.0 / 22.0; Qspr: 48.7; SumScr: 37.7 / 10.2 / 21.5
text-summarization-on-arxiv | BART-LS | ROUGE-1: 50.2
text-summarization-on-booksum | BART-LS | ROUGE: 38.5
text-summarization-on-govreport | BART-LS | ROUGE-1: 62.0
text-summarization-on-pubmed-1 | BART-LS | ROUGE-1: 50.3
text-summarization-on-qmsum | BART-LS | ROUGE-1: 37.9
