| Model | | ROUGE-1 | ROUGE-2 | ROUGE-L | Paper | Code |
| --- | --- | --- | --- | --- | --- | --- |
| Scrambled code + broken (alter) | - | 48.18 | 19.84 | 45.35 | Universal Evasion Attacks on Summarization Scoring | |
| MatchSum (RoBERTa-base) | - | 44.41 | 20.86 | 40.55 | Extractive Summarization as Text Matching | |
| MatchSum (BERT-base) | - | 44.22 | 20.62 | 40.38 | Extractive Summarization as Text Matching | |
| BertSumExt | - | 43.85 | 20.34 | 39.90 | Text Summarization with Pretrained Encoders | |
| BigBird-Pegasus | - | 43.84 | 21.11 | 40.74 | Big Bird: Transformers for Longer Sequences | |
| BERTSUM+Transformer | - | 43.25 | 20.24 | 39.63 | Fine-tune BERT for Extractive Summarization | |
| UniLM (Abstractive Summarization) | - | 43.08 | 20.43 | 40.34 | Unified Language Model Pre-training for Natural Language Understanding and Generation | |
| Selector+Pointer Generator | - | 41.72 | 18.74 | 38.79 | Mixture Content Selection for Diverse Sequence Generation | |
| Bottom-Up Sum | 32.75 | 41.22 | 18.68 | 38.34 | Bottom-Up Abstractive Summarization | |
| TaLK Convolutions (Deep) | - | 40.59 | 18.97 | 36.81 | Time-aware Large Kernel Convolutions | |
| TaLK Convolutions (Standard) | - | 40.03 | 18.45 | 36.13 | Time-aware Large Kernel Convolutions | |
| ML + RL (Paulus et al., 2017) | - | 39.87 | 15.82 | 36.90 | A Deep Reinforced Model for Abstractive Summarization | |
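The ROUGE-1 and ROUGE-2 columns above are unigram and bigram overlap scores between a system summary and a reference. A minimal sketch of F1-style ROUGE-N, assuming plain whitespace tokenization and no stemming (the official ROUGE toolkit additionally applies stemming and other normalization, so its numbers will differ):

```python
from collections import Counter

def ngrams(tokens, n):
    """Multiset of n-grams in a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def rouge_n(candidate, reference, n=1):
    """Toy F1-style ROUGE-N over whitespace tokens (illustration only)."""
    cand = ngrams(candidate.split(), n)
    ref = ngrams(reference.split(), n)
    # Clipped overlap: each n-gram counts at most as often as in the reference.
    overlap = sum((cand & ref).values())
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)
```

For example, `rouge_n("the cat", "the dog", 1)` gives 0.5 (one shared unigram, precision and recall both 0.5). ROUGE-L, the third metric column, instead scores the longest common subsequence and is not covered by this sketch.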