Yang Liu; Mirella Lapata

Abstract
Bidirectional Encoder Representations from Transformers (BERT) represents the latest incarnation of pretrained language models which have recently advanced a wide range of natural language processing tasks. In this paper, we showcase how BERT can be usefully applied in text summarization and propose a general framework for both extractive and abstractive models. We introduce a novel document-level encoder based on BERT which is able to express the semantics of a document and obtain representations for its sentences. Our extractive model is built on top of this encoder by stacking several inter-sentence Transformer layers. For abstractive summarization, we propose a new fine-tuning schedule which adopts different optimizers for the encoder and the decoder as a means of alleviating the mismatch between the two (the former is pretrained while the latter is not). We also demonstrate that a two-staged fine-tuning approach can further boost the quality of the generated summaries. Experiments on three datasets show that our model achieves state-of-the-art results across the board in both extractive and abstractive settings. Our code is available at https://github.com/nlpyang/PreSumm
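The fine-tuning schedule described above can be made concrete with a short sketch. Below is a minimal, illustrative PyTorch snippet (not the authors' released code) showing two Adam optimizers with separate learning rates and warmup horizons for the pretrained encoder and the randomly initialized decoder, using a warmup-then-decay schedule of the form lr = lr̃ · min(step⁻⁰·⁵, step · warmup⁻¹·⁵). The `model.encoder`/`model.decoder` attributes and the specific hyperparameter values are assumptions for illustration.

```python
# Minimal sketch, assuming a model that exposes .encoder (pretrained BERT)
# and .decoder (randomly initialized Transformer) submodules.
# Hyperparameter values are illustrative, not prescriptive.
import torch


def noam_lr(lr_tilde, step, warmup):
    """Warmup-then-decay schedule: lr_tilde * min(step^-0.5, step * warmup^-1.5)."""
    step = max(step, 1)
    return lr_tilde * min(step ** -0.5, step * warmup ** -1.5)


def build_optimizers(model, lr_enc=2e-3, lr_dec=0.1):
    # Two separate Adam optimizers: one for the pretrained encoder,
    # one for the decoder trained from scratch.
    opt_enc = torch.optim.Adam(model.encoder.parameters(),
                               lr=lr_enc, betas=(0.9, 0.999))
    opt_dec = torch.optim.Adam(model.decoder.parameters(),
                               lr=lr_dec, betas=(0.9, 0.999))
    return opt_enc, opt_dec


def step_optimizers(opt_enc, opt_dec, step,
                    lr_enc=2e-3, lr_dec=0.1,
                    warmup_enc=20_000, warmup_dec=10_000):
    # The encoder gets a smaller learning rate and a longer warmup, so the
    # pretrained weights change slowly while the decoder is still unstable.
    for group in opt_enc.param_groups:
        group["lr"] = noam_lr(lr_enc, step, warmup_enc)
    for group in opt_dec.param_groups:
        group["lr"] = noam_lr(lr_dec, step, warmup_dec)
    opt_enc.step()
    opt_dec.step()
```

In a training loop, one would compute the loss, call `backward()`, and then invoke `step_optimizers(opt_enc, opt_dec, step)` once per update, so the mismatch between the pretrained encoder and the untrained decoder is smoothed out by their different schedules.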
Benchmarks
| Benchmark | Model | ROUGE-1 | ROUGE-2 | ROUGE-L |
|---|---|---|---|---|
| abstractive-text-summarization-on-cnn-daily | BertSumExtAbs | 42.13 | 19.60 | 39.18 |
| document-summarization-on-cnn-daily-mail | BertSumExt | 43.85 | 20.34 | 39.90 |
| text-summarization-on-x-sum | BertSumExtAbs | 38.81 | 16.50 | 31.27 |