Coreference Resolution On Winograd Schema

Evaluation Metrics

Accuracy
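
Accuracy here is the fraction of Winograd schema instances whose predicted referent matches the gold referent. A minimal sketch of the computation (function and variable names are illustrative, not taken from any particular evaluation harness):

```python
def accuracy(predictions, gold_labels):
    # Fraction of instances where the predicted referent
    # matches the gold referent.
    assert len(predictions) == len(gold_labels)
    correct = sum(p == g for p, g in zip(predictions, gold_labels))
    return correct / len(gold_labels)

# Example: 3 of 4 schemas resolved correctly -> 0.75
print(accuracy(["A", "B", "A", "B"], ["A", "B", "B", "B"]))
```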

Evaluation Results

Performance of each model on this benchmark

| Model | Accuracy (%) | Paper Title |
| --- | --- | --- |
| PaLM 540B (fine-tuned) | 100 | PaLM: Scaling Language Modeling with Pathways |
| Vega v2 6B (KD-based prompt transfer) | 98.6 | Toward Efficient Language Model Pretraining and Downstream Adaptation via Self-Evolution: A Case Study on SuperGLUE |
| UL2 20B (fine-tuned) | 98.1 | UL2: Unifying Language Learning Paradigms |
| Turing NLR v5 XXL 5.4B (fine-tuned) | 97.3 | Toward Efficient Language Model Pretraining and Downstream Adaptation via Self-Evolution: A Case Study on SuperGLUE |
| ST-MoE-32B 269B (fine-tuned) | 96.6 | ST-MoE: Designing Stable and Transferable Sparse Expert Models |
| DeBERTa-1.5B | 95.9 | DeBERTa: Decoding-enhanced BERT with Disentangled Attention |
| T5-XXL 11B (fine-tuned) | 93.8 | Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer |
| ST-MoE-L 4.1B (fine-tuned) | 93.3 | ST-MoE: Designing Stable and Transferable Sparse Expert Models |
| RoBERTa-WinoGrande 355M | 90.1 | WinoGrande: An Adversarial Winograd Schema Challenge at Scale |
| Flan-T5 XXL (zero-shot) | 89.82 | Scaling Instruction-Finetuned Language Models |
| PaLM 540B (5-shot) | 89.5 | PaLM: Scaling Language Modeling with Pathways |
| PaLM 540B (0-shot) | 89.1 | PaLM: Scaling Language Modeling with Pathways |
| PaLM 2-M (1-shot) | 88.1 | PaLM 2 Technical Report |
| PaLM 2-L (1-shot) | 86.9 | PaLM 2 Technical Report |
| FLAN 137B (prompt-tuned) | 86.5 | Finetuned Language Models Are Zero-Shot Learners |
| PaLM 540B (1-shot) | 86.3 | PaLM: Scaling Language Modeling with Pathways |
| PaLM 2-S (1-shot) | 84.6 | PaLM 2 Technical Report |
| TTTTT 3B (fine-tuned) | 84.6 | TTTTTackling WinoGrande Schemas |
| RoBERTa-DPR 355M | 83.1 | WinoGrande: An Adversarial Winograd Schema Challenge at Scale |
| FLAN 137B (zero-shot) | 80.8 | Finetuned Language Models Are Zero-Shot Learners |
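
The zero- and few-shot entries above typically score each schema by substituting each candidate referent for the ambiguous pronoun and picking the completion the language model assigns the higher likelihood. Below is a minimal sketch of that scoring scheme, using GPT-2 via Hugging Face transformers purely for illustration; the listed papers use their own models, and exact scoring details (e.g. length normalization) vary:

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Illustrative model choice only; the papers in the table use
# much larger models with their own evaluation setups.
tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

def sentence_log_likelihood(text: str) -> float:
    # Sum of token log-probabilities under the causal LM.
    ids = tokenizer(text, return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, labels=ids)
    # out.loss is the mean NLL over the predicted tokens;
    # multiply back to recover the summed log-likelihood.
    return -out.loss.item() * (ids.shape[1] - 1)

schema = "The trophy doesn't fit in the suitcase because {} is too big."
candidates = ["the trophy", "the suitcase"]
scores = {c: sentence_log_likelihood(schema.format(c)) for c in candidates}
prediction = max(scores, key=scores.get)
print(prediction)  # gold answer for this schema: "the trophy"
```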