Transfer Learning in Biomedical Natural Language Processing: An Evaluation of BERT and ELMo on Ten Benchmarking Datasets
Yifan Peng; Shankai Yan; Zhiyong Lu

Abstract
Inspired by the success of the General Language Understanding Evaluation benchmark, we introduce the Biomedical Language Understanding Evaluation (BLUE) benchmark to facilitate research on pre-trained language representations in the biomedical domain. The benchmark consists of five tasks with ten datasets that cover both biomedical and clinical texts with a range of dataset sizes and difficulties. We also evaluate several baselines based on BERT and ELMo and find that the BERT model pre-trained on PubMed abstracts and MIMIC-III clinical notes achieves the best results. We make the datasets, pre-trained models, and code publicly available at https://github.com/ncbi-nlp/BLUE_Benchmark.
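To make the evaluation setup concrete, below is a minimal sketch of fine-tuning the paper's best-performing model on one BLUE sentence-pair task (MedNLI-style natural language inference). The checkpoint name is an assumption: the paper's NCBI_BERT was later released under the name BlueBERT, and the identifier below reflects a commonly used Hugging Face mirror; substitute the checkpoint from the authors' repository if it differs.

```python
# A minimal sketch, assuming the BlueBERT (PubMed + MIMIC-III) checkpoint
# is available under this (assumed) Hugging Face identifier.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

checkpoint = "bionlp/bluebert_pubmed_mimic_uncased_L-12_H-768_A-12"  # assumed mirror
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSequenceClassification.from_pretrained(
    checkpoint,
    num_labels=3,  # MedNLI labels: entailment / neutral / contradiction
)

# Encode a premise-hypothesis pair as one sequence; BERT inserts [SEP].
inputs = tokenizer(
    "The patient denies chest pain.",
    "The patient has no cardiac symptoms.",
    return_tensors="pt",
    truncation=True,
    max_length=128,
)
with torch.no_grad():
    logits = model(**inputs).logits
print(logits.softmax(dim=-1))  # class probabilities for the pair
```

In the paper's setting the classification head would be trained on the task's training split before inference; the snippet only shows the model and input encoding.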
Benchmarks
| Task (Dataset) | Model | Metric |
|---|---|---|
| Document classification (HoC) | NCBI_BERT(large) (P) | F1: 87.3 |
| Medical named entity recognition (ShARe/CLEFE) | NCBI_BERT(base) (P+M) | F1: 79.2 |
| Medical relation extraction (DDI) | NCBI_BERT(large) (P) | F1: 79.9 |
| Named entity recognition (BC5CDR-chemical) | NCBI_BERT(base) (P) | F1: 93.5 |
| Named entity recognition (BC5CDR-disease) | NCBI_BERT(base) (P) | F1: 86.6 |
| Natural language inference (MedNLI) | NCBI_BERT(base) (P+M) | Accuracy: 84.0 |
| Relation extraction (ChemProt) | NCBI_BERT(large) (P) | F1: 74.4 |
| Semantic similarity (BIOSSES) | NCBI_BERT(base) (P+M) | Pearson correlation: 0.916 |
| Semantic similarity (MedSTS) | NCBI_BERT(base) (P+M) | Pearson correlation: 0.848 |

P denotes pre-training on PubMed abstracts; P+M denotes pre-training on PubMed abstracts plus MIMIC-III clinical notes.
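The table mixes two metric conventions: F1 and accuracy on a 0-100 scale for the classification-style tasks, and Pearson correlation on a -1 to 1 scale for the similarity tasks. The sketch below shows how such scores are typically computed; the arrays are illustrative placeholders, not the paper's outputs, and the `average` argument of `f1_score` would need to match each task's micro/macro convention.

```python
# Illustrative metric computation on toy predictions (placeholder data).
from sklearn.metrics import f1_score
from scipy.stats import pearsonr

# Classification-style tasks (e.g., HoC, ChemProt): F1 over predicted labels.
y_true = [0, 1, 2, 1, 0, 2]
y_pred = [0, 1, 1, 1, 0, 2]
print("F1 (micro):", 100 * f1_score(y_true, y_pred, average="micro"))

# Semantic-similarity tasks (BIOSSES, MedSTS): Pearson correlation between
# gold similarity scores and model-predicted scores.
gold = [4.0, 2.5, 0.5, 3.0]
pred = [3.8, 2.2, 1.0, 3.1]
r, _ = pearsonr(gold, pred)
print("Pearson r:", r)
```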