Towards Expert-Level Medical Question Answering with Large Language Models

Abstract

Recent artificial intelligence (AI) systems have reached milestones in "grand challenges" ranging from Go to protein-folding. The capability to retrieve medical knowledge, reason over it, and answer medical questions comparably to physicians has long been viewed as one such grand challenge. Large language models (LLMs) have catalyzed significant progress in medical question answering; Med-PaLM was the first model to exceed a "passing" score in US Medical Licensing Examination (USMLE) style questions with a score of 67.2% on the MedQA dataset. However, this and other prior work suggested significant room for improvement, especially when models' answers were compared to clinicians' answers. Here we present Med-PaLM 2, which bridges these gaps by leveraging a combination of base LLM improvements (PaLM 2), medical domain finetuning, and prompting strategies including a novel ensemble refinement approach. Med-PaLM 2 scored up to 86.5% on the MedQA dataset, improving upon Med-PaLM by over 19% and setting a new state-of-the-art. We also observed performance approaching or exceeding state-of-the-art across MedMCQA, PubMedQA, and MMLU clinical topics datasets. We performed detailed human evaluations on long-form questions along multiple axes relevant to clinical applications. In pairwise comparative ranking of 1066 consumer medical questions, physicians preferred Med-PaLM 2 answers to those produced by physicians on eight of nine axes pertaining to clinical utility (p < 0.001). We also observed significant improvements compared to Med-PaLM on every evaluation axis (p < 0.001) on newly introduced datasets of 240 long-form "adversarial" questions to probe LLM limitations. While further studies are necessary to validate the efficacy of these models in real-world settings, these results highlight rapid progress towards physician-level performance in medical question answering.
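
The abstract refers to prompting strategies including a novel "ensemble refinement" (ER) approach, in which the model first samples several chain-of-thought generations and is then re-prompted with the question together with those generations to produce refined answers that are aggregated by plurality vote. The sketch below is a minimal illustration of that two-stage loop, not the authors' implementation; the `generate` and `parse_answer` callables are hypothetical stand-ins for an LLM sampling API and an answer extractor.

```python
from collections import Counter
from typing import Callable, List


def ensemble_refinement(
    question: str,
    generate: Callable[[str, float], str],  # hypothetical LLM call: (prompt, temperature) -> completion
    parse_answer: Callable[[str], str],     # extracts the final answer (e.g. "A") from a completion
    n_reasoning: int = 11,                  # stage-1 chain-of-thought samples
    n_refine: int = 33,                     # stage-2 refinement samples
) -> str:
    """Two-stage ensemble-refinement sketch (assumed interfaces, not Med-PaLM 2's code).

    Stage 1: sample several chain-of-thought explanations for the question.
    Stage 2: re-prompt the model with the question plus the stage-1 generations,
             sample refined answers, and return the plurality vote.
    """
    cot_prompt = (
        f"Question: {question}\n"
        "Explain your reasoning step by step, then give a final answer."
    )
    reasoning_paths: List[str] = [generate(cot_prompt, 0.7) for _ in range(n_reasoning)]

    refine_prompt = (
        f"Question: {question}\n\n"
        "Here are several candidate explanations and answers:\n\n"
        + "\n\n".join(reasoning_paths)
        + "\n\nConsidering the explanations above, give the single best final answer."
    )
    refined = [parse_answer(generate(refine_prompt, 0.7)) for _ in range(n_refine)]

    # Aggregate the refined answers by plurality vote.
    return Counter(refined).most_common(1)[0][0]
```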

Code Repositories

m42-health/med42
Mentioned in GitHub

Benchmarks

Benchmark | Methodology | Metric
multiple-choice-question-answering-mcqa-on-11 | Med-PaLM 2 (ER) | Accuracy: 95.8
multiple-choice-question-answering-mcqa-on-11 | Med-PaLM 2 (5-shot) | Accuracy: 94.4
multiple-choice-question-answering-mcqa-on-11 | Med-PaLM 2 (CoT + SC) | Accuracy: 95.1
multiple-choice-question-answering-mcqa-on-21 | Med-PaLM 2 (ER) | Test Set Accuracy (%): 72.3
multiple-choice-question-answering-mcqa-on-21 | Med-PaLM 2 (CoT + SC) | Test Set Accuracy (%): 71.5
multiple-choice-question-answering-mcqa-on-21 | Med-PaLM 2 (5-shot) | Test Set Accuracy (%): 71.3
multiple-choice-question-answering-mcqa-on-23 | Med-PaLM 2 (ER) | Accuracy: 88.7
multiple-choice-question-answering-mcqa-on-23 | Med-PaLM 2 (CoT + SC) | Accuracy: 88.3
multiple-choice-question-answering-mcqa-on-23 | Med-PaLM 2 (5-shot) | Accuracy: 88.3
multiple-choice-question-answering-mcqa-on-24 | Med-PaLM 2 (CoT + SC) | Accuracy: 80.0
multiple-choice-question-answering-mcqa-on-24 | Med-PaLM 2 (ER) | Accuracy: 84.4
multiple-choice-question-answering-mcqa-on-24 | Med-PaLM 2 (5-shot) | Accuracy: 77.8
multiple-choice-question-answering-mcqa-on-25 | Med-PaLM 2 (ER) | Accuracy: 92.3
multiple-choice-question-answering-mcqa-on-25 | Med-PaLM 2 (5-shot) | Accuracy: 95.2
multiple-choice-question-answering-mcqa-on-25 | Med-PaLM 2 (CoT + SC) | Accuracy: 93.4
multiple-choice-question-answering-mcqa-on-26 | Med-PaLM 2 (CoT + SC) | Accuracy: 81.5
multiple-choice-question-answering-mcqa-on-26 | Med-PaLM 2 (5-shot) | Accuracy: 80.9
multiple-choice-question-answering-mcqa-on-26 | Med-PaLM 2 (ER) | Accuracy: 83.2
multiple-choice-question-answering-mcqa-on-8 | Med-PaLM 2 (ER) | Accuracy: 92
multiple-choice-question-answering-mcqa-on-8 | Med-PaLM 2 (CoT + SC) | Accuracy: 89
multiple-choice-question-answering-mcqa-on-8 | Med-PaLM 2 (5-shot) | Accuracy: 90
question-answering-on-medqa-usmle | Med-PaLM 2 (5-shot) | Accuracy: 79.7
question-answering-on-medqa-usmle | Med-PaLM 2 (CoT + SC) | Accuracy: 83.7
question-answering-on-medqa-usmle | Med-PaLM 2 | Accuracy: 85.4
question-answering-on-pubmedqa | Med-PaLM 2 (CoT + SC) | Accuracy: 74.0
question-answering-on-pubmedqa | Med-PaLM 2 (ER) | Accuracy: 75.0
question-answering-on-pubmedqa | Med-PaLM 2 (5-shot) | Accuracy: 79.2
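
The methodology labels above denote prompting strategies: 5-shot in-context prompting, chain-of-thought with self-consistency (CoT + SC), and ensemble refinement (ER). As a hedged illustration of how a CoT + SC accuracy figure could be computed on a multiple-choice benchmark, the sketch below samples several reasoning paths per question and majority-votes the extracted answer letters. The `sample_completion` callable and the letter-extraction heuristic are assumptions for illustration, not the paper's evaluation harness.

```python
import re
from collections import Counter
from typing import Callable, Dict, List


def self_consistency_accuracy(
    questions: List[Dict[str, str]],            # each item: {"prompt": str, "answer": "A".."D"}
    sample_completion: Callable[[str], str],    # hypothetical LLM call returning one CoT completion
    num_samples: int = 11,                      # sampled reasoning paths per question
) -> float:
    """Estimate multiple-choice accuracy with chain-of-thought self-consistency.

    For each question, sample several reasoning paths, take the last answer
    letter mentioned in each completion, and use the majority vote as the
    prediction.
    """
    correct = 0
    for item in questions:
        votes: List[str] = []
        for _ in range(num_samples):
            completion = sample_completion(item["prompt"])
            letters = re.findall(r"\b[A-D]\b", completion)  # crude answer extraction
            if letters:
                votes.append(letters[-1])
        if votes and Counter(votes).most_common(1)[0][0] == item["answer"]:
            correct += 1
    return correct / len(questions)
```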
