BioBERT: a pre-trained biomedical language representation model for biomedical text mining

Jinhyuk Lee, Wonjin Yoon, Sungdong Kim, Donghyeon Kim, Sunkyu Kim, Chan Ho So, Jaewoo Kang

Abstract

Biomedical text mining is becoming increasingly important as the number of biomedical documents rapidly grows. With the progress in natural language processing (NLP), extracting valuable information from biomedical literature has gained popularity among researchers, and deep learning has boosted the development of effective biomedical text mining models. However, directly applying the advancements in NLP to biomedical text mining often yields unsatisfactory results due to a word distribution shift from general domain corpora to biomedical corpora. In this article, we investigate how the recently introduced pre-trained language model BERT can be adapted for biomedical corpora. We introduce BioBERT (Bidirectional Encoder Representations from Transformers for Biomedical Text Mining), which is a domain-specific language representation model pre-trained on large-scale biomedical corpora. With almost the same architecture across tasks, BioBERT largely outperforms BERT and previous state-of-the-art models in a variety of biomedical text mining tasks when pre-trained on biomedical corpora. While BERT obtains performance comparable to that of previous state-of-the-art models, BioBERT significantly outperforms them on the following three representative biomedical text mining tasks: biomedical named entity recognition (0.62% F1 score improvement), biomedical relation extraction (2.80% F1 score improvement) and biomedical question answering (12.24% MRR improvement). Our analysis results show that pre-training BERT on biomedical corpora helps it to understand complex biomedical texts. We make the pre-trained weights of BioBERT freely available at https://github.com/naver/biobert-pretrained, and the source code for fine-tuning BioBERT available at https://github.com/dmis-lab/biobert.
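As a concrete illustration of the workflow the abstract describes (pre-trained BioBERT weights plus a task-specific head, fine-tuned with minimal architectural change), here is a minimal sketch. It assumes the Hugging Face transformers library and the dmis-lab/biobert-v1.1 checkpoint referenced in the benchmarks below; the label set and example sentence are hypothetical, and the token-classification head is randomly initialized until fine-tuned.

```python
# Minimal sketch (not the paper's original TensorFlow code): load BioBERT
# weights and attach a token-classification head for biomedical NER.
# Assumes the "dmis-lab/biobert-v1.1" checkpoint on the Hugging Face Hub.
import torch
from transformers import AutoTokenizer, AutoModelForTokenClassification

MODEL_NAME = "dmis-lab/biobert-v1.1"
LABELS = ["O", "B-Disease", "I-Disease"]  # hypothetical BIO tag set (NCBI-disease style)

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
# The classifier layer is newly initialized; it must be fine-tuned on a
# labeled NER corpus before its predictions are meaningful.
model = AutoModelForTokenClassification.from_pretrained(
    MODEL_NAME, num_labels=len(LABELS)
)

# Tokenize a pre-split biomedical sentence; BioBERT reuses BERT's
# original WordPiece vocabulary, so rare biomedical terms are split
# into subword pieces.
sentence = "Familial hypercholesterolemia is caused by LDLR mutations ."
enc = tokenizer(sentence.split(), is_split_into_words=True, return_tensors="pt")

with torch.no_grad():
    logits = model(**enc).logits  # shape: (1, seq_len, num_labels)

# Print one predicted tag per WordPiece token.
pred_ids = logits.argmax(dim=-1)[0].tolist()
tokens = tokenizer.convert_ids_to_tokens(enc["input_ids"][0].tolist())
for tok, pid in zip(tokens, pred_ids):
    print(f"{tok}\t{LABELS[pid]}")
```

Fine-tuning this model end to end (e.g., with a standard cross-entropy loss over the BIO labels) is what the paper evaluates on NER benchmarks such as NCBI-disease and JNLPBA.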

Code Repositories

- jpablou/Matching-The-Blanks-Ths (PyTorch)
- naver/biobert-pretrained (Official)
- phucdev/TL_Bio_RE (TensorFlow)
- MeRajat/SolvingAlmostAnythingWithBert (PyTorch)
- cypressd1999/FYP_2021 (PyTorch)
- rahul-1996/KGraphs-QA (PyTorch)
- re-search/DocProduct (TensorFlow)
- plkmo/BERT-Relation-Extraction (PyTorch)
- kuldeep7688/BioMedicalBertNer (PyTorch)
- ManasRMohanty/DS5500-capstone (PyTorch)
- ardakdemir/my_bert_ner (TensorFlow)
- EmilyAlsentzer/clinicalBERT (TensorFlow)
- charles9n/bert-sklearn (PyTorch)
- dmis-lab/biobert (Official, TensorFlow)
- mocherson/aki_bert (PyTorch)
- hieudepchai/BERT_IE (PyTorch)
- dmis-lab/bern (TensorFlow)

Benchmarks

Benchmark | Methodology | Metrics
drug-drug-interaction-extraction-on-ddi | BioBERT | F1: 0.8088; Micro F1: 80.88
few-shot-learning-on-medconceptsqa | dmis-lab/biobert-v1.1 | Accuracy: 25.458
named-entity-recognition-ner-on-jnlpba | BioBERT | F1: 77.59
named-entity-recognition-ner-on-ncbi-disease | BioBERT | F1: 89.71
named-entity-recognition-on-species-800 | BioBERT | F1: 75.31
question-answering-on-medqa-usmle | BioBERT (large) | Accuracy: 36.7
question-answering-on-medqa-usmle | BioBERT (base) | Accuracy: 34.1
relation-extraction-on-chemprot | BioBERT | F1: 76.46
representation-learning-on-scidocs | BioBERT | Avg.: 58.8
zero-shot-learning-on-medconceptsqa | dmis-lab/biobert-v1.1 | Accuracy: 26.151
