Adapting Pretrained Text-to-Text Models for Long Text Sequences

Wenhan Xiong, Anchit Gupta, Shubham Toshniwal, Yashar Mehdad, Wen-tau Yih

Abstract
We present an empirical study of adapting an existing pretrained text-to-text model for long-sequence inputs. Through a comprehensive study along three axes of the pretraining pipeline (model architecture, optimization objective, and pretraining corpus), we propose an effective recipe to build long-context models from existing short-context models. Specifically, we replace the full attention in transformers with pooling-augmented blockwise attention, and pretrain the model with a masked-span prediction task with spans of varying length. In terms of the pretraining corpus, we find that using randomly concatenated short documents from a large open-domain corpus results in better performance than using existing long-document corpora, which are typically limited in their domain coverage. With these findings, we build a long-context model that achieves competitive performance on long-text QA tasks and establishes a new state of the art on five long-text summarization datasets, often outperforming previous methods with larger model sizes. Our code has been released at https://github.com/facebookresearch/bart_ls.
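To make the two pretraining ingredients concrete, below is a minimal sketch in PyTorch, not the authors' released implementation: a blockwise attention function augmented with mean-pooled key/value summaries, and a helper that samples varying span lengths for the masked-span objective. The function names, the block and pooling sizes, and the Poisson span-length distribution are illustrative assumptions, not details taken from the paper.

```python
# Illustrative sketch only: local block attention plus pooled "global" summaries,
# approximating the pooling-augmented blockwise attention described above.
import torch
import torch.nn.functional as F


def pooling_augmented_block_attention(q, k, v, block_size=128, pool_size=4):
    """q, k, v: (batch, seq_len, dim); seq_len assumed divisible by block_size
    and pool_size. Each query attends to keys in its own block plus mean-pooled
    summaries of the whole sequence."""
    bsz, seq_len, dim = q.shape
    n_blocks = seq_len // block_size

    # Split the sequence into fixed-size blocks: (batch, n_blocks, block_size, dim)
    qb = q.view(bsz, n_blocks, block_size, dim)
    kb = k.view(bsz, n_blocks, block_size, dim)
    vb = v.view(bsz, n_blocks, block_size, dim)

    # Mean-pool keys/values over windows of pool_size tokens to form coarse
    # global summaries: (batch, seq_len // pool_size, dim)
    k_pool = k.view(bsz, seq_len // pool_size, pool_size, dim).mean(dim=2)
    v_pool = v.view(bsz, seq_len // pool_size, pool_size, dim).mean(dim=2)

    # Local scores: queries attend to keys within the same block.
    local_scores = torch.einsum("bnqd,bnkd->bnqk", qb, kb)
    # Global scores: every query also attends to the pooled summary tokens.
    global_scores = torch.einsum("bnqd,bgd->bnqg", qb, k_pool)

    # Single softmax over the concatenation of local and pooled keys.
    scores = torch.cat([local_scores, global_scores], dim=-1) / dim ** 0.5
    probs = F.softmax(scores, dim=-1)
    p_local, p_global = probs.split([block_size, k_pool.size(1)], dim=-1)

    out = torch.einsum("bnqk,bnkd->bnqd", p_local, vb) \
        + torch.einsum("bnqg,bgd->bnqd", p_global, v_pool)
    return out.reshape(bsz, seq_len, dim)


def sample_span_lengths(n_spans, mean_len=3.0):
    # Varying-length spans for the masked-span prediction objective. The
    # Poisson distribution here is an assumption for illustration; the paper
    # only specifies that span lengths vary.
    return torch.poisson(torch.full((n_spans,), mean_len)).clamp(min=1).long()
```

The design intuition: because each query only scores block_size local keys plus a fixed budget of pooled summaries, the attention cost grows roughly linearly with sequence length rather than quadratically, which is what makes adapting a short-context model to much longer inputs practical.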
Code Repositories

https://github.com/facebookresearch/bart_ls
Benchmarks
| Benchmark | Model | Metrics |
|---|---|---|
| Long-range modeling on SCROLLS | BART-LS | Avg.: 39.76; CNLI: 87.1; GovRep: 59.4 / 29.8 / 30.8; Nrtv: 26.2; QALT EM-T/H: 37.8 / 34.0; QMSum: 35.1 / 11.0 / 22.0; Qspr: 48.7; SumScr: 37.7 / 10.2 / 21.5 |
| Text summarization on arXiv | BART-LS | ROUGE-1: 50.2 |
| Text summarization on BookSum | BART-LS | ROUGE: 38.5 |
| Text summarization on GovReport | BART-LS | ROUGE-1: 62.0 |
| Text summarization on PubMed | BART-LS | ROUGE-1: 50.3 |
| Text summarization on QMSum | BART-LS | ROUGE-1: 37.9 |