From Neural Re-Ranking to Neural Ranking: Learning a Sparse Representation for Inverted Indexing

Hamed Zamani, Mostafa Dehghani, W. Bruce Croft, Erik Learned-Miller, Jaap Kamps


Abstract

The availability of massive data and computing power allowing for effective data-driven neural approaches is having a major impact on machine learning and information retrieval research, but these models have a basic problem with efficiency. Current neural ranking models are implemented as multistage rankers: for efficiency reasons, the neural model only re-ranks the top-ranked documents retrieved by a first-stage efficient ranker in response to a given query. Neural ranking models learn dense representations causing essentially every query term to match every document term, making it highly inefficient or intractable to rank the whole collection. The reliance on a first-stage ranker creates a dual problem: First, the interaction and combination effects are not well understood. Second, the first-stage ranker serves as a "gate-keeper" or filter, effectively blocking the potential of neural models to uncover new relevant documents. In this work, we propose a standalone neural ranking model (SNRM) by introducing a sparsity property to learn a latent sparse representation for each query and document. This representation captures the semantic relationship between the query and documents, but is also sparse enough to enable constructing an inverted index for the whole collection. We parameterize the sparsity of the model to yield a retrieval model as efficient as conventional term-based models. Our model gains in efficiency without loss of effectiveness: it not only outperforms the existing term-matching baselines, but also performs similarly to the recent re-ranking-based neural models with dense representations. Our model can also take advantage of pseudo-relevance feedback for further improvements. More generally, our results demonstrate the importance of sparsity in neural IR models and show that dense representations can be pruned effectively, giving new insights about essential semantic features and their distributions.
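The mechanism the abstract describes can be sketched roughly as follows: an encoder whose ReLU output, combined with an L1 penalty, drives most latent dimensions to exactly zero; a pairwise hinge loss for ranking; and an inverted index built over the non-zero latent dimensions. This is a minimal illustrative sketch only. The names (SparseEncoder, snrm_loss, build_inverted_index) and hyperparameters (hidden_dim, latent_dim, l1_weight) are assumptions, and the paper's actual architecture and training objective differ in detail, so this should not be read as the authors' implementation.

```python
import torch
import torch.nn as nn

class SparseEncoder(nn.Module):
    """Illustrative encoder: maps a bag-of-words vector to a high-dimensional
    non-negative latent vector; the ReLU output lets dimensions be exactly zero."""
    def __init__(self, vocab_size, hidden_dim=500, latent_dim=20000):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(vocab_size, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, latent_dim),
            nn.ReLU(),
        )

    def forward(self, bow):  # bow: (batch, vocab_size) term-frequency vectors
        return self.net(bow)

def snrm_loss(q, d_pos, d_neg, l1_weight=1e-5):
    """Pairwise hinge loss on latent dot-product scores plus an L1 penalty
    that pushes the query/document representations toward sparsity."""
    score_pos = (q * d_pos).sum(dim=1)
    score_neg = (q * d_neg).sum(dim=1)
    hinge = torch.clamp(1.0 - score_pos + score_neg, min=0.0).mean()
    sparsity = l1_weight * (q.abs().mean() + d_pos.abs().mean() + d_neg.abs().mean())
    return hinge + sparsity

def build_inverted_index(encoder, doc_ids, doc_bows):
    """Treat each non-zero latent dimension as a 'latent term': the index maps
    dimension -> posting list of (doc_id, weight), just like a term index."""
    index = {}
    with torch.no_grad():
        reps = encoder(doc_bows)
    for doc_id, rep in zip(doc_ids, reps):
        for dim in rep.nonzero(as_tuple=True)[0].tolist():
            index.setdefault(dim, []).append((doc_id, rep[dim].item()))
    return index
```

At query time the same encoder produces a sparse query vector, and retrieval reduces to looking up its non-zero dimensions in the index and accumulating dot-product scores, which is what allows the model to match the efficiency of a conventional term-based ranker.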

Benchmarks

Benchmark: ad-hoc-information-retrieval-on-trec-robust04

Methodology   MAP      P@20     nDCG@20
SNRM-PRF      0.2971   0.3948   0.4391
SNRM          0.2856   0.3766   0.4310
QL            0.2499   -        -
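For reference, rough implementations of the three reported metrics are sketched below, assuming binary relevance judgments. Official TREC numbers are normally produced with trec_eval, and nDCG on Robust04 may be computed with graded judgments, so this sketch is indicative only.

```python
import math

def precision_at_k(ranked, relevant, k=20):
    """P@k: fraction of the top-k retrieved documents that are relevant."""
    return sum(1 for d in ranked[:k] if d in relevant) / k

def average_precision(ranked, relevant):
    """AP for one query; MAP is the mean of this value over all queries."""
    hits, precisions = 0, []
    for rank, d in enumerate(ranked, start=1):
        if d in relevant:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / max(len(relevant), 1)

def ndcg_at_k(ranked, relevant, k=20):
    """nDCG@k with binary gains: DCG of the ranking divided by the ideal DCG."""
    dcg = sum(1.0 / math.log2(rank + 1)
              for rank, d in enumerate(ranked[:k], start=1) if d in relevant)
    ideal = sum(1.0 / math.log2(rank + 1)
                for rank in range(1, min(len(relevant), k) + 1))
    return dcg / ideal if ideal > 0 else 0.0
```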
