Jan-Philipp Sachs, Harry Freitas da Cruz, Ariane Morassi Sasso, Suparno Datta, Michel Oleynik, Erwin Bottinger, Benjamin Bergner, Erik Faessler, Arpita Kappattanavar

Abstract
The TREC-PM challenge aims to advance the field of information retrieval applied to precision medicine. Here we describe our experimental setup and the results we achieved in its 2018 edition. We explored the use of unsupervised topic models, supervised document classification, and rule-based query-time search term boosting and expansion. We participated in the biomedical articles and clinical trials subtasks and were among the three highest-scoring teams. Our results showed that query expansion combined with hand-crafted rules contributes to better information retrieval metrics. However, the use of a precision medicine classifier did not yield the expected improvement for the biomedical abstracts subtask. In the future, we plan to add different terminologies to replace the hand-crafted rules and to experiment with negation detection.
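The abstract mentions rule-based query-time term boosting and expansion. A minimal sketch of that idea, assuming Lucene-style `term^boost` syntax; the rule table, synonym entries, and boost weight below are invented for illustration and are not the authors' actual rules:

```python
# Hypothetical hand-crafted expansion rules: original term -> synonyms.
# These entries are illustrative only, not the rules used in the paper.
EXPANSION_RULES = {
    "melanoma": ["skin cancer"],
    "braf": ["b-raf"],
}

def expand_query(terms, boost=2.0):
    """Boost the original terms and append synonyms from the rule table."""
    parts = []
    for term in terms:
        parts.append(f'"{term}"^{boost}')        # original term gets a boost
        for synonym in EXPANSION_RULES.get(term.lower(), []):
            parts.append(f'"{synonym}"')         # expansions keep default weight
    return " OR ".join(parts)

print(expand_query(["melanoma", "BRAF"]))
```

For the query terms `melanoma` and `BRAF`, this produces the expanded query string `"melanoma"^2.0 OR "skin cancer" OR "BRAF"^2.0 OR "b-raf"`, which a Lucene-based engine would score with the original terms weighted higher than the expansions.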
Benchmarks
| Benchmark | Methodology | Metrics |
|---|---|---|
| information-retrieval-on-trec-pm | hpipubcommon | infNDCG: 0.5605 |
| information-retrieval-on-trec-pm | hpictall | infNDCG: 0.5545 |