Summary Level Training of Sentence Rewriting for Abstractive Summarization
Sanghwan Bae, Taeuk Kim, Jihoon Kim, Sang-goo Lee

Abstract
As an attempt to combine extractive and abstractive summarization, Sentence Rewriting models adopt the strategy of first extracting salient sentences from a document and then paraphrasing the selected ones to generate a summary. However, existing models in this framework mostly rely on sentence-level rewards or suboptimal labels, causing a mismatch between the training objective and the evaluation metric. In this paper, we present a novel training signal that directly maximizes summary-level ROUGE scores through reinforcement learning. In addition, we incorporate BERT into our model, making good use of its ability in natural language understanding. In extensive experiments, we show that the combination of our proposed model and training procedure obtains new state-of-the-art performance on both the CNN/Daily Mail and New York Times datasets. We also demonstrate that it generalizes better on the DUC-2002 test set.
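To make the training-signal mismatch concrete, the sketch below (a toy illustration, not the authors' implementation) contrasts a sentence-level reward, which averages per-sentence ROUGE, with a summary-level reward computed on the assembled summary. The `rouge1_f1` helper and the example texts are simplified assumptions for illustration only.

```python
from collections import Counter
from typing import List


def rouge1_f1(candidate: str, reference: str) -> float:
    """Toy unigram-overlap ROUGE-1 F1 between two texts."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)


def sentence_level_reward(extracted: List[str], reference: str) -> float:
    """Average of per-sentence ROUGE scores: each sentence is judged in isolation."""
    return sum(rouge1_f1(s, reference) for s in extracted) / len(extracted)


def summary_level_reward(extracted: List[str], reference: str) -> float:
    """ROUGE of the assembled summary: the kind of signal the paper optimizes."""
    return rouge1_f1(" ".join(extracted), reference)


if __name__ == "__main__":
    reference = "the cat sat on the mat"
    # The second sentence overlaps the reference well on its own,
    # but is redundant once the first sentence is already selected.
    extracted = ["the cat sat on the mat", "a cat sat on a mat"]
    print(f"sentence-level reward: {sentence_level_reward(extracted, reference):.3f}")  # 0.833
    print(f"summary-level reward:  {summary_level_reward(extracted, reference):.3f}")   # 0.667
```

As the example shows, a redundant sentence can score well in isolation yet lower the reward of the assembled summary; this is the mismatch that a summary-level training signal avoids.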
Benchmarks
| Benchmark | Methodology | ROUGE-1 | ROUGE-2 | ROUGE-L |
|---|---|---|---|---|
| Abstractive Text Summarization on CNN/Daily Mail | BERT-ext + abs + RL + rerank | 41.90 | 19.08 | 39.64 |
| Extractive Document Summarization on CNN/Daily Mail | BERT-ext + RL | 42.76 | 19.87 | 39.11 |