Multiple Choice Question Answering (MCQA) on 11
Evaluation Metric
Accuracy
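As a quick illustration of the Accuracy metric reported below, here is a minimal sketch that scores a set of multiple-choice predictions against answer keys (the data shown is hypothetical, not taken from the benchmark):

```python
def accuracy(predictions, answers):
    """Fraction of questions where the chosen option matches the answer key."""
    correct = sum(p == a for p, a in zip(predictions, answers))
    return correct / len(answers)

# Hypothetical model outputs and gold answer keys
preds = ["B", "C", "A", "D"]
keys = ["B", "C", "D", "D"]
print(accuracy(preds, keys))  # → 0.75
```

The leaderboard values are this fraction expressed as a percentage (e.g. 0.958 → 95.8).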
Evaluation Results
Performance of each model on this benchmark
| Model | Accuracy (%) | Paper Title | Repository |
|---|---|---|---|
| Med-PaLM 2 (ER) | 95.8 | Towards Expert-Level Medical Question Answering with Large Language Models | |
| Med-PaLM 2 (CoT + SC) | 95.1 | Towards Expert-Level Medical Question Answering with Large Language Models | |
| Med-PaLM 2 (5-shot) | 94.4 | Towards Expert-Level Medical Question Answering with Large Language Models | |
| Chinchilla (few-shot, k=5) | 79.9 | Galactica: A Large Language Model for Science | |
| Gopher (few-shot, k=5) | 70.8 | Galactica: A Large Language Model for Science | |
| GAL 120B (zero-shot) | 68.8 | Galactica: A Large Language Model for Science | |
| OPT (few-shot, k=5) | 30.6 | Galactica: A Large Language Model for Science | |
| BLOOM (few-shot, k=5) | 28.5 | Galactica: A Large Language Model for Science | |