Xingxing Zhang; Mirella Lapata; Furu Wei; Ming Zhou

Abstract
Extractive summarization models require sentence-level labels, which are usually created heuristically (e.g., with rule-based methods) given that most summarization datasets only have document-summary pairs. Since these labels might be suboptimal, we propose a latent variable extractive model where sentences are viewed as latent variables and sentences with activated variables are used to infer gold summaries. During training the loss comes *directly* from gold summaries. Experiments on the CNN/Dailymail dataset show that our model improves over a strong extractive baseline trained on heuristically approximated labels and also performs competitively with several recent models.
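To make the contrast concrete, the heuristic labels the abstract refers to are typically built by greedily selecting document sentences that maximize overlap with the gold summary. The sketch below illustrates one such rule-based labeling procedure; the helper names and the unigram-F1 scoring (a crude ROUGE-1 stand-in) are illustrative assumptions, not the paper's exact method.

```python
# Hedged sketch: a common rule-based way to create extractive labels
# by greedy selection against the gold summary. Helper names are
# hypothetical; the scoring is a crude ROUGE-1 (unigram F1) stand-in.
from collections import Counter


def unigram_f1(candidate_tokens, reference_tokens):
    """F1 overlap between token multisets."""
    cand, ref = Counter(candidate_tokens), Counter(reference_tokens)
    overlap = sum((cand & ref).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)


def greedy_oracle_labels(doc_sentences, gold_summary, max_selected=3):
    """Greedily mark sentences that most improve overlap with the summary."""
    gold = gold_summary.lower().split()
    selected, labels = [], [0] * len(doc_sentences)
    best = 0.0
    for _ in range(max_selected):
        gains = []
        for i, sent in enumerate(doc_sentences):
            if labels[i]:
                continue
            cand = " ".join(selected + [sent]).lower().split()
            gains.append((unigram_f1(cand, gold), i))
        if not gains:
            break
        score, i = max(gains)
        if score <= best:  # stop when no remaining sentence helps
            break
        best = score
        labels[i] = 1
        selected.append(doc_sentences[i])
    return labels


doc = [
    "The cat sat on the mat.",
    "Stock prices rose sharply on Monday.",
    "Analysts attributed the rally to strong earnings.",
]
summary = "Stocks rose on strong earnings."
print(greedy_oracle_labels(doc, summary))  # → [0, 1, 1]
```

The proposed latent model instead treats these binary selections as latent variables and backpropagates a loss computed directly against the gold summary, avoiding commitment to one fixed heuristic labeling.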
Benchmarks
| Benchmark | Methodology | Metrics |
|---|---|---|
| extractive-document-summarization-on-cnn | Latent | ROUGE-1: 41.05, ROUGE-2: 18.77, ROUGE-L: 37.54 |