Text Summarization On Gigaword

Evaluation Metrics

ROUGE-1
ROUGE-2
ROUGE-L
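
For reference, ROUGE F1 scores of the kind reported below can be computed with standard tooling. The sketch that follows uses the open-source `rouge-score` package with a made-up reference/prediction pair; it is illustrative only and not necessarily the exact scorer behind this leaderboard.

```python
# Minimal sketch: computing ROUGE-1 / ROUGE-2 / ROUGE-L with the
# open-source `rouge-score` package (pip install rouge-score).
# The reference and prediction strings are hypothetical examples,
# not taken from the Gigaword dataset.
from rouge_score import rouge_scorer

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)

reference = "police arrest five anti-nuclear protesters"      # gold headline (hypothetical)
prediction = "police arrest five protesters at nuclear site"  # model output (hypothetical)

scores = scorer.score(reference, prediction)
for name, s in scores.items():
    # Each entry holds precision, recall, and F1; leaderboards typically report F1 * 100.
    print(f"{name}: F1 = {s.fmeasure * 100:.2f}")
```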

Evaluation Results

Performance of each model on this benchmark

| Model | ROUGE-1 | ROUGE-2 | ROUGE-L | Paper Title | Repository |
| --- | --- | --- | --- | --- | --- |
| OpenAI/o3-mini | 60.12 | 54.22 | 57.21 | - | - |
| Riple/Saanvi-v0.1 | 52.21 | 45.58 | 60.29 | - | - |
| Pegasus+DotProd | 40.6 | 21.0 | 37.0 | Beyond Reptile: Meta-Learned Dot-Product Maximization between Gradients for Improved Single-Task Regularization | - |
| BART-RXF | 40.45 | 20.69 | 36.56 | Better Fine-Tuning by Reducing Representational Collapse | |
| MUPPET BART Large | 40.4 | 20.54 | 36.21 | Muppet: Massive Multi-task Representations with Pre-Finetuning | |
| OFA | 39.81 | 20.66 | 37.11 | OFA: Unifying Architectures, Tasks, and Modalities Through a Simple Sequence-to-Sequence Learning Framework | |
| Transformer+Rep(Uni) | 39.81 | 20.40 | 36.93 | Rethinking Perturbations in Encoder-Decoders for Fast Training | |
| Transformer+Wdrop | 39.66 | 20.45 | 36.59 | Rethinking Perturbations in Encoder-Decoders for Fast Training | |
| ProphetNet | 39.51 | 20.42 | 36.69 | ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training | |
| ERNIE-GEN LARGE (large-scale text corpora) | 39.46 | 20.34 | 36.74 | ERNIE-GEN: An Enhanced Multi-Flow Pre-training and Fine-tuning Framework for Natural Language Generation | |
| PALM | 39.45 | 20.37 | 36.75 | PALM: Pre-training an Autoencoding&Autoregressive Language Model for Context-conditioned Generation | |
| Best Summary Length | 39.27 | 20.40 | 37.75 | A New Approach to Overgenerating and Scoring Abstractive Summaries | |
| ERNIE-GEN LARGE | 39.25 | 20.25 | 36.53 | ERNIE-GEN: An Enhanced Multi-Flow Pre-training and Fine-tuning Framework for Natural Language Generation | |
| ControlCopying + BPNorm | 39.19 | 20.38 | 36.69 | Controlling the Amount of Verbatim Copying in Abstractive Summarization | |
| PEGASUS | 39.12 | 19.86 | 36.24 | PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization | |
| BiSET | 39.11 | 19.78 | 36.87 | BiSET: Bi-directional Selective Encoding with Template for Abstractive Summarization | |
| ControlCopying + SBWR | 39.08 | 20.47 | 36.69 | Controlling the Amount of Verbatim Copying in Abstractive Summarization | |
| UniLM | 38.90 | 20.05 | 36.00 | Unified Language Model Pre-training for Natural Language Understanding and Generation | |
| ERNIE-GEN BASE | 38.83 | 20.04 | 36.20 | ERNIE-GEN: An Enhanced Multi-Flow Pre-training and Fine-tuning Framework for Natural Language Generation | |
| MASS | 38.73 | 19.71 | 35.96 | MASS: Masked Sequence to Sequence Pre-training for Language Generation | |