Data-to-Text Generation with Content Selection and Planning
Ratish Puduppully; Li Dong; Mirella Lapata

Abstract
Recent advances in data-to-text generation have led to the use of large-scale datasets and neural network models which are trained end-to-end, without explicitly modeling what to say and in what order. In this work, we present a neural network architecture which incorporates content selection and planning without sacrificing end-to-end training. We decompose the generation task into two stages. Given a corpus of data records (paired with descriptive documents), we first generate a content plan highlighting which information should be mentioned and in which order, and then generate the document while taking the content plan into account. Automatic and human-based evaluation experiments show that our model outperforms strong baselines, improving the state of the art on the recently released RotoWire dataset.
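To make the two-stage decomposition concrete, here is a minimal, purely illustrative sketch. It is not the authors' neural implementation: the record fields, the salience heuristic, and the string templates below are hypothetical stand-ins for the learned content planner and decoder.

```python
from dataclasses import dataclass

@dataclass
class Record:
    entity: str  # e.g. a player or team name
    rtype: str   # record type, e.g. "PTS" (points) or "AST" (assists)
    value: int

# Stage 1: content selection and planning. A hand-written salience
# heuristic stands in for the learned planner; it decides *what to say*
# (filtering) and *in what order* (sorting).
def select_and_plan(records: list[Record], k: int = 3) -> list[Record]:
    salient = [r for r in records if r.rtype in {"PTS", "AST"}]
    return sorted(salient, key=lambda r: -r.value)[:k]

# Stage 2: surface realization. Templates stand in for the neural decoder
# that conditions on the content plan.
def generate(plan: list[Record]) -> str:
    clauses = [f"{r.entity} recorded {r.value} {r.rtype}" for r in plan]
    return ", ".join(clauses) + "."

records = [
    Record("LeBron James", "PTS", 25),
    Record("LeBron James", "AST", 9),
    Record("Kevin Love", "REB", 8),
]
plan = select_and_plan(records)  # content plan: ordered subset of records
print(generate(plan))
# LeBron James recorded 25 PTS, LeBron James recorded 9 AST.
```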
Code Repositories
Benchmarks
| Benchmark | Methodology | Metrics |
|---|---|---|
| Data-to-Text Generation on RotoWire | Neural Content Planning + conditional copy | BLEU: 16.50 |
| Data-to-Text Generation on RotoWire (Content Ordering) | Neural Content Planning + conditional copy | BLEU: 16.50, DLD: 18.58% |
| Data-to-Text Generation on RotoWire (Content Selection) | Neural Content Planning + conditional copy | Precision: 34.18%, Recall: 51.22% |
| Data-to-Text Generation on RotoWire (Relation Generation) | Neural Content Planning + conditional copy | Precision: 87.47%, Count: 34.28 |
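A note on the metrics above: DLD is the content ordering measure, the normalized Damerau-Levenshtein distance between the sequence of records mentioned in the generated summary and the sequence in the reference (following Wiseman et al., 2017). The sketch below is a generic restricted (optimal string alignment) variant over record-ID sequences, for illustration only; the example IDs are invented, and the paper's evaluation script may normalize or orient the score differently.

```python
def osa_distance(a: list[str], b: list[str]) -> int:
    """Optimal string alignment distance: edit distance that also
    allows transposing adjacent elements (restricted Damerau-Levenshtein)."""
    m, n = len(a), len(b)
    d = [[0] * (n + 1) for _ in range(m + 1)]
    for i in range(m + 1):
        d[i][0] = i
    for j in range(n + 1):
        d[0][j] = j
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0 if a[i - 1] == b[j - 1] else 1
            d[i][j] = min(
                d[i - 1][j] + 1,         # deletion
                d[i][j - 1] + 1,         # insertion
                d[i - 1][j - 1] + cost,  # substitution
            )
            # adjacent transposition, e.g. [x, y] vs. [y, x]
            if i > 1 and j > 1 and a[i - 1] == b[j - 2] and a[i - 2] == b[j - 1]:
                d[i][j] = min(d[i][j], d[i - 2][j - 2] + 1)
    return d[m][n]

def normalized_dld(pred: list[str], ref: list[str]) -> float:
    """Normalize by the longer sequence so the distance lies in [0, 1]."""
    denom = max(len(pred), len(ref)) or 1
    return osa_distance(pred, ref) / denom

# Hypothetical record IDs extracted from a generated and a reference summary.
pred = ["PTS|LeBron", "AST|LeBron", "REB|Love"]
ref  = ["AST|LeBron", "PTS|LeBron", "REB|Love"]
print(normalized_dld(pred, ref))  # 0.333...: one adjacent transposition
```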