Text Summarization On Gigaword
Evaluation metrics: ROUGE-1, ROUGE-2, ROUGE-L
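Scores on this leaderboard are typically ROUGE F-measures on a 0-100 scale. As a minimal sketch of how the three metrics can be computed, assuming Google's `rouge-score` package (the page does not state which ROUGE implementation produced these numbers):

```python
from rouge_score import rouge_scorer

# Assumption: the commonly used rouge-score package; settings such as
# stemming vary between papers, so exact numbers may differ.
scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)

reference = "australia launches new trade talks with china"
candidate = "australia opens trade negotiations with china"

scores = scorer.score(reference, candidate)
for name, score in scores.items():
    # Each result carries precision, recall, and F-measure; leaderboards
    # conventionally report the F-measure scaled to 0-100.
    print(f"{name}: F1 = {100 * score.fmeasure:.2f}")
```

ROUGE-1 and ROUGE-2 count overlapping unigrams and bigrams respectively, while ROUGE-L is based on the longest common subsequence between candidate and reference.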
Results: performance of each model on this benchmark.
| Model | ROUGE-1 | ROUGE-2 | ROUGE-L | Paper |
|---|---|---|---|---|
| OpenAI/o3-mini | 60.12 | 54.22 | 57.21 | - |
| Riple/Saanvi-v0.1 | 52.21 | 45.58 | 60.29 | - |
| Pegasus+DotProd | 40.6 | 21.0 | 37.0 | Beyond Reptile: Meta-Learned Dot-Product Maximization between Gradients for Improved Single-Task Regularization |
| BART-RXF | 40.45 | 20.69 | 36.56 | Better Fine-Tuning by Reducing Representational Collapse |
| MUPPET BART Large | 40.4 | 20.54 | 36.21 | Muppet: Massive Multi-task Representations with Pre-Finetuning |
| OFA | 39.81 | 20.66 | 37.11 | OFA: Unifying Architectures, Tasks, and Modalities Through a Simple Sequence-to-Sequence Learning Framework |
| Transformer+Rep(Uni) | 39.81 | 20.40 | 36.93 | Rethinking Perturbations in Encoder-Decoders for Fast Training |
| Transformer+Wdrop | 39.66 | 20.45 | 36.59 | Rethinking Perturbations in Encoder-Decoders for Fast Training |
| ProphetNet | 39.51 | 20.42 | 36.69 | ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training |
| ERNIE-GEN LARGE (large-scale text corpora) | 39.46 | 20.34 | 36.74 | ERNIE-GEN: An Enhanced Multi-Flow Pre-training and Fine-tuning Framework for Natural Language Generation |
| PALM | 39.45 | 20.37 | 36.75 | PALM: Pre-training an Autoencoding & Autoregressive Language Model for Context-conditioned Generation |
| Best Summary Length | 39.27 | 20.40 | 37.75 | A New Approach to Overgenerating and Scoring Abstractive Summaries |
| ERNIE-GEN LARGE | 39.25 | 20.25 | 36.53 | ERNIE-GEN: An Enhanced Multi-Flow Pre-training and Fine-tuning Framework for Natural Language Generation |
| ControlCopying + BPNorm | 39.19 | 20.38 | 36.69 | Controlling the Amount of Verbatim Copying in Abstractive Summarization |
| PEGASUS | 39.12 | 19.86 | 36.24 | PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization |
| BiSET | 39.11 | 19.78 | 36.87 | BiSET: Bi-directional Selective Encoding with Template for Abstractive Summarization |
| ControlCopying + SBWR | 39.08 | 20.47 | 36.69 | Controlling the Amount of Verbatim Copying in Abstractive Summarization |
| UniLM | 38.90 | 20.05 | 36.00 | Unified Language Model Pre-training for Natural Language Understanding and Generation |
| ERNIE-GEN BASE | 38.83 | 20.04 | 36.20 | ERNIE-GEN: An Enhanced Multi-Flow Pre-training and Fine-tuning Framework for Natural Language Generation |
| MASS | 38.73 | 19.71 | 35.96 | MASS: Masked Sequence to Sequence Pre-training for Language Generation |
Showing the first 20 of 41 entries.
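To produce comparable numbers for a new system, one would score the model's generated headlines against the Gigaword reference summaries. A hedged sketch, assuming the Hugging Face `gigaword` dataset (fields `document` and `summary`) and a hypothetical `summarize()` stand-in for a real model:

```python
from datasets import load_dataset
from rouge_score import rouge_scorer

# Assumption: the Hugging Face "gigaword" dataset; newer versions of the
# `datasets` library may require trust_remote_code=True to load it.
test = load_dataset("gigaword", split="test")

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)

def summarize(text: str) -> str:
    # Hypothetical stand-in for a trained model: a crude lead baseline
    # that keeps the first eight tokens of the source article.
    return " ".join(text.split()[:8])

totals = {"rouge1": 0.0, "rouge2": 0.0, "rougeL": 0.0}
for example in test:
    scores = scorer.score(example["summary"], summarize(example["document"]))
    for key in totals:
        totals[key] += scores[key].fmeasure

for key, value in totals.items():
    # Average F-measure over the test set, scaled to 0-100 like the table.
    print(f"{key}: {100 * value / len(test):.2f}")
```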