Text Simplification on ASSET
Evaluation Metrics
BLEU
SARI (EASSE >= 0.2.1)
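As a minimal sketch of how the SARI column is computed, the snippet below uses the EASSE toolkit (version >= 0.2.1, as listed above) at the corpus level. The sentences here are toy examples, not the actual ASSET test split, and on the real benchmark each source sentence comes with 10 human references.

```python
from easse.sari import corpus_sari

# Toy example: one source sentence, one system output, and three
# reference simplifications (ASSET itself provides 10 per sentence).
orig_sents = ["About 95 species are currently accepted."]
sys_sents = ["About 95 you now get in."]
refs_sents = [
    ["About 95 species are currently known."],
    ["About 95 species are now accepted."],
    ["95 species are now accepted."],
]  # one inner list per reference set, each aligned with orig_sents

sari = corpus_sari(
    orig_sents=orig_sents,
    sys_sents=sys_sents,
    refs_sents=refs_sents,
)
print(f"SARI: {sari:.2f}")
```

For the full benchmark, `orig_sents` would hold the ASSET test sources, `sys_sents` the model outputs, and `refs_sents` the reference simplifications; the multi-reference BLEU column can be computed analogously with a corpus-level BLEU implementation (EASSE also ships a BLEU wrapper).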
Evaluation Results
Performance of each model on this benchmark:
| Model | BLEU | SARI | Paper Title | Repository |
|---|---|---|---|---|
| GPT-175B (15 SARI-selected examples, random ordering) | 73.92 | 47.94 | Metric-Based In-context Learning: A Case Study in Text Simplification | - |
| MUSS (BART+ACCESS Supervised) | 72.98 | 44.15 | MUSS: Multilingual Unsupervised Sentence Simplification by Mining Paraphrases | - |
| Control Prefixes (BART) | - | 43.58 | Control Prefixes for Parameter-Efficient Text Generation | - |
| TST | - | 43.21 | Text Simplification by Tagging | - |
| MUSS (BART+ACCESS Unsupervised) | - | 42.65 | MUSS: Multilingual Unsupervised Sentence Simplification by Mining Paraphrases | - |
| ACCESS | 75.99* | 40.13 | Controllable Sentence Simplification | - |
| DMASS-DCSS | 71.44* | 38.67 | Integrating Transformer and Paraphrase Rules for Sentence Simplification | - |
| DRESS-LS | 86.39* | 36.59 | Sentence Simplification with Deep Reinforcement Learning | - |
| UNTS (Unsupervised) | 76.14* | 35.19 | Unsupervised Neural Text Simplification | - |
| PBMT-R | 79.39* | 34.63 | - | - |
| BART | - | - | The GEM Benchmark: Natural Language Generation, its Evaluation and Metrics | - |
| T5 | - | - | The GEM Benchmark: Natural Language Generation, its Evaluation and Metrics | - |