Case-based Reasoning for Natural Language Queries over Knowledge Bases
Rajarshi Das, Manzil Zaheer, Dung Thai, Ameya Godbole, Ethan Perez, Jay-Yoon Lee, Lizhen Tan, Lazaros Polymenakos, Andrew McCallum

Abstract
It is often challenging to solve a complex problem from scratch, but much easier if we can access other similar problems with their solutions, a paradigm known as case-based reasoning (CBR). We propose a neuro-symbolic CBR approach (CBR-KBQA) for question answering over large knowledge bases. CBR-KBQA consists of a nonparametric memory that stores cases (questions and their logical forms) and a parametric model that generates a logical form for a new question by retrieving cases relevant to it. On several KBQA datasets containing complex questions, CBR-KBQA achieves competitive performance. For example, on the ComplexWebQuestions dataset, CBR-KBQA outperforms the current state of the art by 11% in accuracy. Furthermore, we show that CBR-KBQA can use new cases *without* any further training: by incorporating a few human-labeled examples into the case memory, CBR-KBQA successfully generates logical forms containing unseen KB entities as well as relations.
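To make the retrieve-then-generate idea from the abstract concrete, the sketch below shows the two components in miniature: a nonparametric case memory that can be extended without retraining, and a parametric generator that conditions on the retrieved cases. This is an illustrative assumption, not the paper's implementation; the `encode` bag-of-words function, the toy logical forms, and the input format are placeholders for CBR-KBQA's trained dense retriever and Transformer seq2seq model that emits executable queries.

```python
# Minimal sketch of a retrieve-then-generate CBR loop for KBQA.
# Assumptions: a toy bag-of-words encoder stands in for a learned question
# encoder, and the logical forms are illustrative rather than real KB queries.
from collections import Counter
import math


def encode(text):
    """Toy bag-of-words embedding (stand-in for a learned question encoder)."""
    return Counter(text.lower().split())


def cosine(a, b):
    """Cosine similarity between two sparse bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0


class CaseMemory:
    """Nonparametric memory of (question, logical form) cases.

    New cases can be added at any time without retraining the generator,
    which is how the abstract describes handling unseen KB entities/relations.
    """

    def __init__(self):
        self.cases = []        # list of (question, logical_form) pairs
        self.embeddings = []   # parallel list of question embeddings

    def add(self, question, logical_form):
        self.cases.append((question, logical_form))
        self.embeddings.append(encode(question))

    def retrieve(self, query, k=3):
        """Return the k cases whose questions are most similar to the query."""
        q = encode(query)
        scored = sorted(
            zip(self.cases, self.embeddings),
            key=lambda ce: cosine(q, ce[1]),
            reverse=True,
        )
        return [case for case, _ in scored[:k]]


def build_generator_input(query, retrieved):
    """Concatenate the new question with retrieved cases; the parametric
    seq2seq generator would consume this string and emit a logical form."""
    parts = [f"question: {query}"]
    for q, lf in retrieved:
        parts.append(f"case question: {q} case logical form: {lf}")
    return " | ".join(parts)


# Usage: seed the memory with a few cases, retrieve, and build the generator input.
memory = CaseMemory()
memory.add("who directed Titanic", "(JOIN film.director (film Titanic))")
memory.add("who wrote the screenplay for Avatar", "(JOIN film.writer (film Avatar))")

query = "who directed Avatar"
cases = memory.retrieve(query, k=2)
print(build_generator_input(query, cases))
# The resulting string would be fed to a seq2seq generator, which would
# produce a logical form such as (JOIN film.director (film Avatar)).
```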
Benchmarks
| Benchmark | Method | Accuracy (%) |
|---|---|---|
| Knowledge Base Question Answering on ComplexWebQuestions | QGG | 44.1 |
| Knowledge Base Question Answering on ComplexWebQuestions | PullNet | 45.9 |
| Knowledge Base Question Answering on ComplexWebQuestions | CBR-KBQA | 70.4 |
| Semantic Parsing on WebQuestionsSP | CBR-KBQA | 70 |