Question Answering on COPA

Evaluation Metric

Accuracy
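
On COPA, accuracy is simply the fraction of questions for which the model selects the correct one of the two candidate alternatives. The sketch below illustrates this scoring scheme under stated assumptions; the names (`CopaExample`, `score_choice`) are illustrative and are not taken from any official COPA evaluation code.

```python
# Minimal sketch of COPA-style accuracy scoring.
# CopaExample and score_choice are assumed/illustrative names,
# not part of any official evaluation script.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class CopaExample:
    premise: str
    question: str   # "cause" or "effect"
    choice1: str
    choice2: str
    label: int      # 0 if choice1 is correct, 1 if choice2 is correct


def copa_accuracy(examples: List[CopaExample],
                  score_choice: Callable[[str, str, str], float]) -> float:
    """Accuracy = (# questions where the higher-scored choice is the gold choice) / (# questions).

    `score_choice(premise, question, choice)` is assumed to return the model's
    plausibility score for a single candidate alternative.
    """
    correct = 0
    for ex in examples:
        s1 = score_choice(ex.premise, ex.question, ex.choice1)
        s2 = score_choice(ex.premise, ex.question, ex.choice2)
        pred = 0 if s1 >= s2 else 1
        correct += int(pred == ex.label)
    return correct / len(examples) if examples else 0.0
```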

Evaluation Results

Performance of each model on this benchmark

| Model | Accuracy | Paper Title | Repository |
|---|---|---|---|
| PaLM 540B (finetuned) | 100 | PaLM: Scaling Language Modeling with Pathways | |
| Vega v2 6B (KD-based prompt transfer) | 99.4 | Toward Efficient Language Model Pretraining and Downstream Adaptation via Self-Evolution: A Case Study on SuperGLUE | - |
| ST-MoE-32B 269B (fine-tuned) | 99.2 | ST-MoE: Designing Stable and Transferable Sparse Expert Models | |
| UL2 20B (fine-tuned) | 99 | UL2: Unifying Language Learning Paradigms | |
| DeBERTa-Ensemble | 98.4 | DeBERTa: Decoding-enhanced BERT with Disentangled Attention | |
| Turing NLR v5 XXL 5.4B (fine-tuned) | 98.2 | Toward Efficient Language Model Pretraining and Downstream Adaptation via Self-Evolution: A Case Study on SuperGLUE | - |
| DeBERTa-1.5B | 96.8 | DeBERTa: Decoding-enhanced BERT with Disentangled Attention | |
| PaLM 2-L (1-shot) | 96.0 | PaLM 2 Technical Report | |
| T5-XXL 11B (fine-tuned) | 94.8 | Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer | |
| FLAN 137B (prompt-tuned) | 94 | Finetuned Language Models Are Zero-Shot Learners | |
| T5-XL 3B (fine-tuned) | 92 | Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer | |
| GPT-3 175B (few-shot, k=32) | 92 | Language Models are Few-Shot Learners | |
| FLAN 137B (zero-shot) | 91 | Finetuned Language Models Are Zero-Shot Learners | |
| ST-MoE-L 4.1B (fine-tuned) | 91 | ST-MoE: Designing Stable and Transferable Sparse Expert Models | |
| GPT-3 175B (0-shot) | 91 | Language Models are Few-Shot Learners | |
| T0-3B (CoT fine-tuned) | 90.9 | The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-Tuning | |
| RoBERTa-Winogrande-ft 355M (fine-tuned) | 90.6 | WinoGrande: An Adversarial Winograd Schema Challenge at Scale | |
| PaLM 2-M (1-shot) | 90.0 | PaLM 2 Technical Report | |
| Flipped-3B | 89.88 | Guess the Instruction! Flipped Learning Makes Language Models Stronger Zero-Shot Learners | |
| PaLM 2-S (1-shot) | 89.0 | PaLM 2 Technical Report | |