Natural Language Inference on MedNLI
Evaluation Metric
Accuracy
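As a quick illustration of how the leaderboard scores below are computed, here is a minimal Python sketch of the accuracy metric over MedNLI's three-way labels (entailment / contradiction / neutral). The gold labels and predictions in the example are hypothetical placeholders, not drawn from the MedNLI test set.

```python
from typing import List

# MedNLI uses three-way sentence-pair labels.
LABELS = ("entailment", "contradiction", "neutral")


def accuracy(gold: List[str], predicted: List[str]) -> float:
    """Fraction of examples whose predicted label matches the gold label."""
    assert len(gold) == len(predicted) and len(gold) > 0
    correct = sum(g == p for g, p in zip(gold, predicted))
    return correct / len(gold)


if __name__ == "__main__":
    # Hypothetical gold labels and model predictions (illustration only).
    gold = ["entailment", "neutral", "contradiction", "entailment"]
    pred = ["entailment", "neutral", "neutral", "entailment"]
    print(f"Accuracy: {accuracy(gold, pred):.2%}")  # -> Accuracy: 75.00%
```

The leaderboard values below are this quantity reported as a percentage over the MedNLI test set.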
Evaluation Results
Performance of each model on this benchmark
| Model | Accuracy (%) | Paper Title | Repository |
|---|---|---|---|
| ClinicalMosaic | 86.59 | Patient Trajectory Prediction: Integrating Clinical Notes with Transformers | - |
| SciFive-large | 86.57 | SciFive: a text-to-text transformer model for biomedical literature | - |
| BioELECTRA-Base | 86.34 | BioELECTRA: Pretrained Biomedical text Encoder using Discriminators | - |
| CharacterBERT (base, medical) | 84.95 | CharacterBERT: Reconciling ELMo and BERT for Word-Level Open-Vocabulary Representations From Characters | - |
| NCBI_BERT(base) (P+M) | 84.00 | Transfer Learning in Biomedical Natural Language Processing: An Evaluation of BERT and ELMo on Ten Benchmarking Datasets | - |
| BiomedGPT-B | 83.83 | BiomedGPT: A Generalist Vision-Language Foundation Model for Diverse Biomedical Tasks | - |
| BioBERT-MIMIC | 83.45 | Saama Research at MEDIQA 2019: Pre-trained BioBERT with Attention Visualisation for Medical Natural Language Inference | - |