Abstract
Background: Outcomes are variables monitored during clinical trials to assess the impact of interventions on human health. Automatic assessment of the semantic similarity of trial outcomes is required for several tasks, such as detecting outcome switching (unjustified changes of the pre-defined outcomes of a trial) and implementing Core Outcome Sets (the minimal sets of outcomes that should be reported in a given medical domain).

Objective: This study aims to develop an algorithm for assessing the semantic similarity of pairs of primary and reported outcomes. We focus on approaches that do not rely on manually curated domain-specific resources such as ontologies and lexicons.

Methods: We tested several approaches: single similarity measures based on strings, stems, lemmas, paths and distances in an ontology, and vector representations of phrases; classifiers that combine several single measures as features; and a deep learning approach based on fine-tuning pre-trained deep language representations. The language models used were BERT, trained on general-domain text, and BioBERT and SciBERT, trained on biomedical and scientific text, respectively. We also explored whether performance improves when expression variants of outcomes are taken into account (e.g., using the name of a measurement instrument in place of the outcome name, or using an abbreviated form). In addition, we release an open-access corpus annotated for the semantic similarity of outcome pairs.

Results: Classifiers using single measures as features outperformed the single measures, and the deep learning approach with BioBERT and SciBERT in turn outperformed the classifiers. BioBERT achieved the best F-measure of 89.75%. Taking expression variants of outcomes into account did not improve the best single measure or the classifiers, but substantially improved the deep learning models: with variants, BioBERT's F-measure rose to 93.38%.

Conclusions: Deep learning approaches based on pre-trained language representations performed best for assessing the semantic similarity of clinical trial outcomes, without requiring any manually curated domain-specific resources such as ontologies and lexical resources. Taking expression variants of outcomes into account further improved the deep learning models' performance, indicating that accounting for expression variability matters for semantic understanding.
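The strongest approach in the abstract is fine-tuning a pre-trained BERT-family model as a sentence-pair classifier over outcome pairs. The sketch below shows what that setup might look like with the Hugging Face transformers library; the checkpoint name, example outcome strings, and binary label convention are assumptions for illustration, not details confirmed by the paper.

```python
# Minimal sketch of sentence-pair classification for outcome similarity,
# assuming the Hugging Face transformers library and the dmis-lab BioBERT
# checkpoint. Example outcome strings are hypothetical.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL = "dmis-lab/biobert-base-cased-v1.1"  # assumption: any BERT-family checkpoint works

tokenizer = AutoTokenizer.from_pretrained(MODEL)
# num_labels=2: binary decision "similar" vs. "not similar" for an outcome pair.
# The classification head is randomly initialized here; in the paper's setup it
# would be fine-tuned on the annotated outcome pairs before use.
model = AutoModelForSequenceClassification.from_pretrained(MODEL, num_labels=2)

# BERT-family models handle pair tasks by packing both texts into a single
# [CLS] primary [SEP] reported [SEP] input sequence.
primary = "overall survival at 12 months"   # hypothetical primary outcome
reported = "12-month mortality rate"        # hypothetical reported outcome
inputs = tokenizer(primary, reported, return_tensors="pt", truncation=True)

model.eval()
with torch.no_grad():
    logits = model(**inputs).logits
prob_similar = torch.softmax(logits, dim=-1)[0, 1].item()
print(f"P(similar) = {prob_similar:.3f}")  # ~0.5 until the head is fine-tuned
```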
Benchmarks
| Benchmark | Method | F1 (%) | Precision (%) | Recall (%) |
|---|---|---|---|---|
| sentence-embeddings-for-biomedical-texts-on-3 | BERT-Base uncased (fine-tuned on "Annotated corpus for semantic similarity of clinical trial outcomes, original corpus") | 86.8 | 85.76 | 88.15 |
| sentence-embeddings-for-biomedical-texts-on-3 | BioBERT (pre-trained on PubMed abstracts + PMC, fine-tuned on "Annotated corpus for semantic similarity of clinical trial outcomes, original corpus") | 89.75 | 88.93 | 90.76 |
| sentence-embeddings-for-biomedical-texts-on-3 | SciBERT cased (SciVocab, fine-tuned on "Annotated corpus for semantic similarity of clinical trial outcomes, original corpus") | 89.3 | 87.31 | 91.53 |
| sentence-embeddings-for-biomedical-texts-on-3 | BERT-Base cased (fine-tuned on "Annotated corpus for semantic similarity of clinical trial outcomes, original corpus") | 84.21 | 83.36 | 85.2 |
| sentence-embeddings-for-biomedical-texts-on-3 | SciBERT uncased (SciVocab, fine-tuned on "Annotated corpus for semantic similarity of clinical trial outcomes, original corpus") | 89.3 | 87.99 | 90.78 |
| sentence-embeddings-for-biomedical-texts-on-4 | SciBERT uncased (SciVocab, fine-tuned on "Annotated corpus for semantic similarity of clinical trial outcomes, expanded corpus") | 91.51 | 91.3 | 91.79 |
| sentence-embeddings-for-biomedical-texts-on-4 | BERT-Base uncased (fine-tuned on "Annotated corpus for semantic similarity of clinical trial outcomes, expanded corpus") | 89.16 | 89.31 | 89.12 |
| sentence-embeddings-for-biomedical-texts-on-4 | BioBERT (pre-trained on PubMed abstracts + PMC, fine-tuned on "Annotated corpus for semantic similarity of clinical trial outcomes, expanded corpus") | 93.38 | 92.98 | 93.85 |
| sentence-embeddings-for-biomedical-texts-on-4 | BERT-Base cased (fine-tuned on "Annotated corpus for semantic similarity of clinical trial outcomes, expanded corpus") | 89.12 | 88.25 | 90.1 |
| sentence-embeddings-for-biomedical-texts-on-4 | SciBERT cased (SciVocab, fine-tuned on "Annotated corpus for semantic similarity of clinical trial outcomes, expanded corpus") | 90.69 | 89 | 92.54 |
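As a quick consistency check on the table: F1 is the harmonic mean of precision and recall, F1 = 2PR / (P + R). Recomputing it from the reported precision and recall gives values close to, but not exactly equal to, the published F1 scores, which is expected if the published numbers were averaged over multiple runs or folds rather than derived from the averaged precision and recall.

```python
# Harmonic mean of precision and recall (all values in percent).
def f1(precision: float, recall: float) -> float:
    return 2 * precision * recall / (precision + recall)

print(f"{f1(88.93, 90.76):.2f}")  # 89.84 vs. reported 89.75 (BioBERT, original corpus)
print(f"{f1(92.98, 93.85):.2f}")  # 93.41 vs. reported 93.38 (BioBERT, expanded corpus)
```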