Data-to-Text Generation on WebNLG Full
Evaluation Metric
BLEU
Results
Performance of each model on this benchmark:
| Model | BLEU | Paper Title | Repository |
|---|---|---|---|
| Control Prefixes (A1, A2, T5-large) | 62.27 | Control Prefixes for Parameter-Efficient Text Generation | |
| Control Prefixes (A1, T5-large) | 61.94 | Control Prefixes for Parameter-Efficient Text Generation | |
| T5-large + Wiki + Position | 60.56 | Stage-wise Fine-tuning for Graph-to-Text Generation | |
| T5-large | 59.70 | Investigating Pretrained Language Models for Graph-to-Text Generation | |
| T5-Large | 57.1 | Text-to-Text Pre-Training for Data-to-Text Tasks | |
| HTLM (prefix 0.1%) | 56.3 | HTLM: Hyper-Text Pre-Training and Prompting of Language Models | - |
| DATATUNER_NO_FC | 52.9 | Have Your Text and Use It Too! End-to-End Neural Data-to-Text Generation with Semantic Fidelity | |
| Transformer (Pipeline) | 51.68 | Neural data-to-text generation: A comparison between pipeline and end-to-end architectures | |
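For reference, BLEU (the metric scored above) combines clipped n-gram precision with a brevity penalty. The scores in the table come from each paper's own evaluation pipeline; the following is only a minimal single-sentence sketch of the standard BLEU-4 formulation (uniform weights, geometric mean), not a reproduction of any specific toolkit such as SacreBLEU.

```python
import math
from collections import Counter

def ngrams(tokens, n):
    """All contiguous n-grams of a token list."""
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(candidate, references, max_n=4):
    """Single-sentence BLEU sketch: geometric mean of clipped n-gram
    precisions (n = 1..max_n) times a brevity penalty.
    `candidate` and each element of `references` are token lists."""
    log_precisions = []
    for n in range(1, max_n + 1):
        cand_counts = Counter(ngrams(candidate, n))
        # Clip each n-gram count by its maximum count in any reference.
        max_ref_counts = Counter()
        for ref in references:
            for gram, count in Counter(ngrams(ref, n)).items():
                max_ref_counts[gram] = max(max_ref_counts[gram], count)
        clipped = sum(min(c, max_ref_counts[g]) for g, c in cand_counts.items())
        total = sum(cand_counts.values())
        if clipped == 0 or total == 0:
            return 0.0  # a zero precision drives the geometric mean to zero
        log_precisions.append(math.log(clipped / total))
    # Brevity penalty against the reference length closest to the candidate.
    c_len = len(candidate)
    r_len = min((abs(len(r) - c_len), len(r)) for r in references)[1]
    bp = 1.0 if c_len > r_len else math.exp(1 - r_len / c_len)
    return bp * math.exp(sum(log_precisions) / max_n)
```

A perfect match scores 1.0 (leaderboards typically report this scaled by 100); real evaluations additionally aggregate statistics over the whole corpus rather than per sentence, and smooth zero counts.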