Logical Reasoning on BIG-bench (Penguins in a Table)
Evaluation Metrics
Accuracy
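Accuracy here is the fraction of questions for which the model's final answer matches the gold answer. A minimal sketch of that computation, assuming exact string matching after normalization (the example predictions and gold answers below are hypothetical, not taken from the task data):

```python
# Minimal sketch of the accuracy metric used on this leaderboard.
# The example items below are hypothetical, not drawn from the task data.

def accuracy(predictions: list[str], references: list[str]) -> float:
    """Fraction of examples where the predicted answer matches the gold answer."""
    if len(predictions) != len(references):
        raise ValueError("predictions and references must be the same length")
    correct = sum(p.strip().lower() == r.strip().lower()
                  for p, r in zip(predictions, references))
    return correct / len(references)

if __name__ == "__main__":
    preds = ["Vincent", "5", "Bernard"]   # hypothetical model outputs
    golds = ["Vincent", "5", "Gwen"]      # hypothetical gold answers
    print(f"Accuracy: {accuracy(preds, golds):.1%}")  # prints "Accuracy: 66.7%"
```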
Evaluation Results
Performance of each model on this benchmark.
| Model | Accuracy (%) | Paper Title | Repository |
|---|---|---|---|
| PaLM 2 (few-shot, k=3, CoT) | 84.9 | PaLM 2 Technical Report | |
| PaLM 2 (few-shot, k=3, Direct) | 65.8 | PaLM 2 Technical Report | |
| Chinchilla-70B (few-shot, k=5) | 48.7 | Training Compute-Optimal Large Language Models | |
| PaLM 540B (few-shot, k=3) | 44.5 | BloombergGPT: A Large Language Model for Finance | |
| Gopher-280B (few-shot, k=5) | 40.6 | Scaling Language Models: Methods, Analysis & Insights from Training Gopher | |
| BLOOM 176B (few-shot, k=3) | 40.41 | BloombergGPT: A Large Language Model for Finance | |
| Bloomberg GPT (few-shot, k=3) | 37.67 | BloombergGPT: A Large Language Model for Finance | |
| GPT-NeoX (few-shot, k=3) | 33.56 | BloombergGPT: A Large Language Model for Finance | |
| OPT 66B (few-shot, k=3) | 28.08 | BloombergGPT: A Large Language Model for Finance |
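Several rows distinguish "Direct" from "CoT" (chain-of-thought) few-shot prompting with k exemplars. A rough sketch of how such a prompt can be assembled (the exemplar text, the `build_prompt` helper, and the data layout are illustrative assumptions, not the evaluation code used in the cited papers):

```python
# Hypothetical sketch of the "few-shot, k=3" settings in the table: k worked
# examples are prepended to the test question, with reasoning included (CoT)
# or omitted (Direct). Exemplar content below is illustrative only.

FEW_SHOT_EXEMPLARS = [
    {
        "question": "Which penguin is the tallest?",
        "cot": "Comparing the height column, Bernard's 80 cm is the largest value.",
        "answer": "Bernard",
    },
    # ... two more exemplars would be added here for k=3 ...
]

def build_prompt(exemplars: list[dict], test_question: str, use_cot: bool) -> str:
    """Concatenate the exemplars and the test question into a single prompt string."""
    parts = []
    for ex in exemplars:
        if use_cot:  # chain-of-thought: show the reasoning before the answer
            parts.append(
                f"Q: {ex['question']}\nA: {ex['cot']} So the answer is {ex['answer']}."
            )
        else:        # direct: answer only
            parts.append(f"Q: {ex['question']}\nA: {ex['answer']}")
    parts.append(f"Q: {test_question}\nA:")
    return "\n\n".join(parts)

if __name__ == "__main__":
    print(build_prompt(FEW_SHOT_EXEMPLARS, "Which penguin is the youngest?", use_cot=True))
```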