Question Answering on BoolQ

Evaluation Metric

Accuracy
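
BoolQ is a yes/no question answering task, so accuracy here is the fraction of questions whose predicted yes/no answer matches the gold label. The sketch below shows one way such an evaluation could be run with the Hugging Face `datasets` library; it is only an illustration, and the `predict` callable is a hypothetical stand-in for whichever model is being scored.

```python
# Minimal sketch, assuming the Hugging Face `datasets` package is installed
# and that `predict` is a user-supplied (hypothetical) callable mapping
# (question, passage) -> bool.
from datasets import load_dataset


def boolq_accuracy(predict):
    """Compute accuracy of `predict` on the BoolQ validation split."""
    val = load_dataset("boolq", split="validation")
    correct = 0
    for example in val:
        pred = predict(example["question"], example["passage"])
        correct += int(pred == example["answer"])  # `answer` is a boolean label
    return correct / len(val)


if __name__ == "__main__":
    # Trivial baseline that always answers "yes" (True), for illustration only.
    always_yes = lambda question, passage: True
    print(f"Always-yes baseline accuracy: {boolq_accuracy(always_yes):.3f}")
```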

Evaluation Results

Performance of each model on this benchmark

| Model | Accuracy | Paper Title |
| --- | --- | --- |
| Mistral-Nemo 12B (HPT) | 99.87 | Hierarchical Prompting Taxonomy: A Universal Evaluation Framework for Large Language Models |
| Gemma-7B | 99.419 | Hierarchical Prompting Taxonomy: A Universal Evaluation Framework for Large Language Models |
| ST-MoE-32B 269B (fine-tuned) | 92.4 | ST-MoE: Designing Stable and Transferable Sparse Expert Models |
| PaLM 540B (fine-tuned) | 92.2 | PaLM: Scaling Language Modeling with Pathways |
| Turing NLR v5 XXL 5.4B (fine-tuned) | 92 | Toward Efficient Language Model Pretraining and Downstream Adaptation via Self-Evolution: A Case Study on SuperGLUE |
| T5-XXL 11B (fine-tuned) | 91.2 | Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer |
| PaLM 2-L (1-shot) | 90.9 | PaLM 2 Technical Report |
| UL2 20B (fine-tuned) | 90.8 | UL2: Unifying Language Learning Paradigms |
| Vega v2 6B (fine-tuned) | 90.5 | Toward Efficient Language Model Pretraining and Downstream Adaptation via Self-Evolution: A Case Study on SuperGLUE |
| DeBERTa-1.5B | 90.4 | DeBERTa: Decoding-enhanced BERT with Disentangled Attention |
| ST-MoE-L 4.1B (fine-tuned) | 88.6 | ST-MoE: Designing Stable and Transferable Sparse Expert Models |
| PaLM 2-M (1-shot) | 88.6 | PaLM 2 Technical Report |
| PaLM 2-S (1-shot) | 88.1 | PaLM 2 Technical Report |
| MUPPET RoBERTa Large | 87.5 | Muppet: Massive Multi-task Representations with Pre-Finetuning |
| FLAN 137B (prompt-tuned) | 86.3 | Finetuned Language Models Are Zero-Shot Learners |
| RoBERTa-large 355M + Entailment as Few-shot Learner | 86.0 | Entailment as Few-Shot Learner |
| T5-Large 770M (fine-tuned) | 85.4 | Exploring the Limits of Transfer Learning with a Unified Text-to-Text Transformer |
| LLaMA 65B (0-shot) | 85.3 | LLaMA: Open and Efficient Foundation Language Models |
| LLaMA 2 70B (0-shot) | 85 | Llama 2: Open Foundation and Fine-Tuned Chat Models |
| FLAN 137B (4-shot) | 84.6 | Finetuned Language Models Are Zero-Shot Learners |