PreFLMR: Scaling Up Fine-Grained Late-Interaction Multi-modal Retrievers

Weizhe Lin, Jingbiao Mei, Jinghong Chen, Bill Byrne

Abstract

Large Multimodal Models (LMMs) excel in natural language and visual understanding but are challenged by exacting tasks such as Knowledge-based Visual Question Answering (KB-VQA) which involve the retrieval of relevant information from document collections to use in shaping answers to questions. We present an extensive training and evaluation framework, M2KR, for KB-VQA. M2KR contains a collection of vision and language tasks which we have incorporated into a single suite of benchmark tasks for training and evaluating general-purpose multi-modal retrievers. We use M2KR to develop PreFLMR, a pre-trained version of the recently developed Fine-grained Late-interaction Multi-modal Retriever (FLMR) approach to KB-VQA, and we report new state-of-the-art results across a range of tasks. We also present investigations into the scaling behaviors of PreFLMR intended to be useful in future developments in general-purpose multi-modal retrievers.
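For context, FLMR-style late-interaction retrieval scores a query against a document by comparing token-level embeddings rather than single pooled vectors: each query token is matched to its most similar document token, and the per-token maxima are summed. The sketch below illustrates this MaxSim scoring in PyTorch; the tensor shapes and the mixing of textual with projected visual query tokens are illustrative assumptions, not the authors' exact implementation.

```python
import torch

def late_interaction_score(query_tokens: torch.Tensor,
                           doc_tokens: torch.Tensor) -> torch.Tensor:
    """MaxSim late-interaction relevance score (ColBERT/FLMR style).

    query_tokens: (num_query_tokens, dim) -- e.g. text tokens plus
                  projected visual tokens, L2-normalised (assumption)
    doc_tokens:   (num_doc_tokens, dim)   -- document token embeddings
    """
    # Pairwise similarity between every query token and document token.
    sim = query_tokens @ doc_tokens.T  # shape (Q, D)
    # Each query token keeps only its best-matching document token;
    # the per-token maxima are summed into a single relevance score.
    return sim.max(dim=1).values.sum()

# Hypothetical example: 32 multi-modal query tokens vs. 180 document tokens.
q = torch.nn.functional.normalize(torch.randn(32, 128), dim=-1)
d = torch.nn.functional.normalize(torch.randn(180, 128), dim=-1)
print(late_interaction_score(q, d).item())
```

Because scoring is a sum over per-token maxima, fine-grained matches (e.g. a single visual entity token aligning with one document token) can drive retrieval even when the overall embeddings differ, which is the motivation for late interaction over single-vector dense retrieval.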

Benchmarks

Benchmark                                  | Methodology         | Metrics
retrieval-on-infoseek                      | PreFLMR             | Recall@5: 62.1
visual-question-answering-vqa-on-infoseek  | RA-VQAv2 w/ PreFLMR | Accuracy: 30.65
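The Recall@5 figure above counts a query as a success if at least one relevant document appears among the top five retrieved items. A minimal sketch of the metric, assuming ranked document-ID lists per query (all names hypothetical):

```python
def recall_at_k(ranked_ids, gold_ids, k=5):
    """Fraction of queries with at least one gold document in the top k.

    ranked_ids: list of per-query ranked document-ID lists
    gold_ids:   list of per-query sets of relevant document IDs
    """
    hits = sum(1 for ranked, gold in zip(ranked_ids, gold_ids)
               if gold & set(ranked[:k]))
    return hits / len(ranked_ids)

# Hypothetical toy example: 2 queries, one with a hit in its top 5.
print(recall_at_k([["d3", "d7", "d1", "d9", "d2"],
                   ["d8", "d4", "d6", "d5", "d0"]],
                  [{"d1"}, {"d99"}], k=5))  # -> 0.5
```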
