Adapting Neural Link Predictors for Data-Efficient Complex Query Answering

Erik Arakelyan; Pasquale Minervini; Daniel Daza; Michael Cochez; Isabelle Augenstein

Abstract

Answering complex queries on incomplete knowledge graphs is a challenging task where a model needs to answer complex logical queries in the presence of missing knowledge. Prior work in the literature has proposed to address this problem with architectures trained end-to-end for the complex query answering task, whose reasoning process is hard to interpret and whose training is data- and resource-intensive. Other lines of research have proposed re-using simple neural link predictors to answer complex queries, reducing the amount of training data by orders of magnitude while providing interpretable answers. The neural link predictor used in such approaches is not explicitly optimised for the complex query answering task, implying that its scores are not calibrated to interact with one another. We propose to address these problems via CQD$^{\mathcal{A}}$, a parameter-efficient score \emph{adaptation} model optimised to re-calibrate neural link prediction scores for the complex query answering task. While the neural link predictor is frozen, the adaptation component -- which only increases the number of model parameters by $0.03\%$ -- is trained on the downstream complex query answering task. Furthermore, the calibration component enables us to support reasoning over queries that include atomic negations, which was previously impossible with link predictors. In our experiments, CQD$^{\mathcal{A}}$ produces significantly more accurate results than current state-of-the-art methods, improving from $34.4$ to $35.1$ Mean Reciprocal Rank values averaged across all datasets and query types while using $\leq 30\%$ of the available training query types. We further show that CQD$^{\mathcal{A}}$ is data-efficient, achieving competitive results with only $1\%$ of the complex training queries, and robust in out-of-domain evaluations.
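The core recipe described in the abstract lends itself to a short illustration: a frozen link predictor scores each query atom, a tiny trainable adapter re-calibrates those scores into $[0, 1]$, and fuzzy-logic operators (a t-norm for conjunction, $1 - x$ for negation) combine them across the query. The sketch below is a minimal, illustrative rendering of that idea in PyTorch; the names (ScoreAdapter, product_tnorm, fuzzy_negation), the per-relation affine parameterisation, and the sigmoid squashing are assumptions made for illustration, not the paper's exact implementation.

```python
import torch
import torch.nn as nn


class ScoreAdapter(nn.Module):
    """Tiny trainable head that re-calibrates frozen link-predictor scores.

    Illustrative sketch only: a per-relation affine transform followed by a
    sigmoid, so that calibrated scores behave like truth values in [0, 1]
    and compose sensibly under t-norms. The real CQD^A parameterisation
    may differ.
    """

    def __init__(self, num_relations: int):
        super().__init__()
        # Two scalars per relation: a scale and a bias (assumed layout).
        self.scale = nn.Parameter(torch.ones(num_relations))
        self.bias = nn.Parameter(torch.zeros(num_relations))

    def forward(self, raw_scores: torch.Tensor, relation_ids: torch.Tensor) -> torch.Tensor:
        # raw_scores:   [batch, num_candidates], scores from the frozen link predictor
        # relation_ids: [batch], relation of each query atom
        scale = self.scale[relation_ids].unsqueeze(-1)
        bias = self.bias[relation_ids].unsqueeze(-1)
        return torch.sigmoid(scale * raw_scores + bias)


def product_tnorm(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """Product t-norm: combines two calibrated atom scores for a conjunction."""
    return a * b


def fuzzy_negation(a: torch.Tensor) -> torch.Tensor:
    """Standard fuzzy negation 1 - x, enabling atoms of the form NOT r(x, y)."""
    return 1.0 - a
```

Under these assumptions, answering a 2i query (the intersection of two atoms sharing the same answer variable) would amount to scoring every candidate entity against each atom with the frozen predictor, calibrating both score vectors with the adapter, combining them with product_tnorm, and ranking candidates by the result; only the adapter parameters (the roughly $0.03\%$ overhead mentioned in the abstract) receive gradients during training.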

Benchmarks

All metrics are MRR (Mean Reciprocal Rank), reported per query type.

Benchmark                             | Methodology          | 1p    | 2p    | 3p    | 2i    | 3i    | ip    | pi    | 2u    | up
complex-query-answering-on-fb15k      | CQD$^{\mathcal{A}}$  | 0.892 | 0.645 | 0.579 | 0.761 | 0.794 | 0.706 | 0.701 | 0.684 | 0.579
complex-query-answering-on-fb15k-237  | CQD$^{\mathcal{A}}$  | 0.467 | 0.136 | 0.114 | 0.345 | 0.483 | 0.209 | 0.274 | 0.176 | 0.114
complex-query-answering-on-nell-995   | CQD$^{\mathcal{A}}$  | 0.604 | 0.229 | 0.167 | 0.434 | 0.526 | 0.264 | 0.321 | 0.200 | 0.170
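The column labels follow the standard complex query answering benchmark nomenclature: 1p is a single link-prediction hop, 2p and 3p are two- and three-hop path queries, 2i and 3i are intersections of two or three atoms, ip and pi mix intersection and projection, 2u is a union of two atoms, and up is a union followed by a projection. For reference, a minimal sketch of how MRR is computed from the rank of each query's correct answer (the function name and example ranks are hypothetical):

```python
def mean_reciprocal_rank(ranks):
    """MRR: the average reciprocal of the (filtered) rank of each correct answer."""
    return sum(1.0 / r for r in ranks) / len(ranks)


# Hypothetical example: correct answers ranked 1st, 2nd and 4th across three queries.
print(mean_reciprocal_rank([1, 2, 4]))  # (1 + 0.5 + 0.25) / 3 ≈ 0.583
```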
