Multiple Choice Question Answering (MCQA) On 25
Evaluation Metric
Accuracy
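As a minimal sketch (not tied to any of the papers listed below), accuracy here is the fraction of questions for which the model's chosen option matches the gold option; the function and example labels below are hypothetical.

```python
def mcqa_accuracy(gold: list[str], predicted: list[str]) -> float:
    """Fraction of questions where the predicted option letter matches the gold option."""
    assert len(gold) == len(predicted), "gold and predicted must have the same length"
    correct = sum(g.strip().upper() == p.strip().upper() for g, p in zip(gold, predicted))
    return correct / len(gold)

# Hypothetical example: 3 of 4 predictions match the gold answers.
gold = ["A", "C", "B", "D"]
predicted = ["A", "C", "D", "D"]
print(f"Accuracy: {mcqa_accuracy(gold, predicted):.2%}")  # Accuracy: 75.00%
```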
Evaluation Results
Performance of each model on this benchmark
| Model | Accuracy | Paper Title | Repository |
|---|---|---|---|
| Med-PaLM 2 (5-shot) | 95.2 | Towards Expert-Level Medical Question Answering with Large Language Models | |
| Med-PaLM 2 (CoT + SC) | 93.4 | Towards Expert-Level Medical Question Answering with Large Language Models | |
| Med-PaLM 2 (ER) | 92.3 | Towards Expert-Level Medical Question Answering with Large Language Models | |
| BioMedGPT-LM-7B | 51.1 | BioMedGPT: Open Multimodal Generative Pre-trained Transformer for BioMedicine | |
| Llama2-7B | 43.38 | Llama 2: Open Foundation and Fine-Tuned Chat Models | |
| Llama2-7B-chat | 40.07 | Llama 2: Open Foundation and Fine-Tuned Chat Models | |