Abstractive Text Summarization on CNN / Daily Mail

Evaluation Metrics

ROUGE-1
ROUGE-2
ROUGE-L
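
All three metrics above are reported as F1 scores scaled to 0–100. As a quick illustration of how they can be computed, here is a minimal sketch using the `rouge-score` Python package; the package choice and the example strings are assumptions for illustration only, since the leaderboard does not specify which ROUGE implementation each paper used.

```python
# Minimal sketch: compute ROUGE-1/2/L for one (reference, candidate) pair
# with the `rouge-score` package (pip install rouge-score).
from rouge_score import rouge_scorer

reference = "the cat sat on the mat ."        # gold summary (hypothetical example)
candidate = "a cat was sitting on the mat ."  # model output (hypothetical example)

scorer = rouge_scorer.RougeScorer(["rouge1", "rouge2", "rougeL"], use_stemmer=True)
scores = scorer.score(reference, candidate)

for name, score in scores.items():
    # Leaderboard entries report the F1 component as a percentage.
    print(f"{name}: {score.fmeasure * 100:.2f}")
```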

Evaluation Results

Performance of each model on this benchmark (top 20 of 53 entries shown):

| Model | ROUGE-1 | ROUGE-2 | ROUGE-L | Paper Title |
| --- | --- | --- | --- | --- |
| Scrambled code + broken (alter) | 48.18 | 19.84 | 45.35 | Universal Evasion Attacks on Summarization Scoring |
| BRIO | 47.78 | 23.55 | 44.57 | BRIO: Bringing Order to Abstractive Summarization |
| Pegasus | 47.36 | 24.02 | 44.45 | Calibrating Sequence Likelihood Improves Conditional Language Generation |
| PEGASUS + SummaReranker | 47.16 | 22.61 | 43.87 | SummaReranker: A Multi-Task Mixture-of-Experts Re-ranking Framework for Abstractive Summarization |
| Scrambled code + broken | 46.71 | 20.39 | 43.56 | Universal Evasion Attacks on Summarization Scoring |
| BART + SimCLS | 46.67 | 22.15 | 43.54 | SimCLS: A Simple Framework for Contrastive Learning of Abstractive Summarization |
| SEASON | 46.27 | 22.64 | 43.08 | Salience Allocation as Guidance for Abstractive Summarization |
| Fourier Transformer | 44.76 | 21.55 | 41.34 | Fourier Transformer: Fast Long Range Modeling by Removing Sequence Redundancy with FFT Operator |
| GLM-XXLarge | 44.7 | 21.4 | 41.4 | GLM: General Language Model Pretraining with Autoregressive Blank Infilling |
| BART + R-Drop | 44.51 | 21.58 | 41.24 | R-Drop: Regularized Dropout for Neural Networks |
| CoCoNet + CoCoPretrain | 44.50 | 21.55 | 41.24 | Learn to Copy from the Copying History: Correlational Copy Network for Abstractive Summarization |
| MUPPET BART Large | 44.45 | 21.25 | 41.4 | Muppet: Massive Multi-task Representations with Pre-Finetuning |
| CoCoNet | 44.39 | 21.41 | 41.05 | Learn to Copy from the Copying History: Correlational Copy Network for Abstractive Summarization |
| BART + R3F | 44.38 | 21.53 | 41.17 | Better Fine-Tuning by Reducing Representational Collapse |
| ERNIE-GEN LARGE (large-scale text corpora) | 44.31 | 21.35 | 41.60 | ERNIE-GEN: An Enhanced Multi-Flow Pre-training and Fine-tuning Framework for Natural Language Generation |
| PALM | 44.30 | 21.12 | 41.41 | PALM: Pre-training an Autoencoding&Autoregressive Language Model for Context-conditioned Generation |
| ProphetNet | 44.20 | 21.17 | 41.30 | ProphetNet: Predicting Future N-gram for Sequence-to-Sequence Pre-training |
| PEGASUS | 44.17 | 21.47 | 41.11 | PEGASUS: Pre-training with Extracted Gap-sentences for Abstractive Summarization |
| BART | 44.16 | 21.28 | 40.90 | BART: Denoising Sequence-to-Sequence Pre-training for Natural Language Generation, Translation, and Comprehension |
| ERNIE-GEN LARGE | 44.02 | 21.17 | 41.26 | ERNIE-GEN: An Enhanced Multi-Flow Pre-training and Fine-tuning Framework for Natural Language Generation |