Self-Chained Image-Language Model for Video Localization and Question Answering

Shoubin Yu, Jaemin Cho, Prateek Yadav, Mohit Bansal

Abstract

Recent studies have shown promising results on utilizing large pre-trained image-language models for video question answering. While these image-language models can efficiently bootstrap the representation learning of video-language models, they typically concatenate uniformly sampled video frames as visual inputs without explicit language-aware, temporal modeling. When only a portion of a video input is relevant to the language query, such uniform frame sampling can often lead to missing important visual cues. Although humans often find a video moment to focus on and rewind the moment to answer questions, training a query-aware video moment localizer often requires expensive annotations and high computational costs. To address this issue, we propose Self-Chained Video Localization-Answering (SeViLA), a novel framework that leverages a single image-language model (BLIP-2) to tackle both temporal keyframe localization and QA on videos. SeViLA framework consists of two modules: Localizer and Answerer, where both are parameter-efficiently fine-tuned from BLIP-2. We propose two ways of chaining these modules for cascaded inference and self-refinement. First, in the forward chain, the Localizer finds multiple language-aware keyframes in a video, which the Answerer uses to predict the answer. Second, in the reverse chain, the Answerer generates keyframe pseudo-labels to refine the Localizer, alleviating the need for expensive video moment localization annotations. Our SeViLA framework outperforms several strong baselines on 5 challenging video QA and event prediction benchmarks, and achieves the state-of-the-art in both fine-tuning (NExT-QA, STAR) and zero-shot (NExT-QA, STAR, How2QA, VLEP) settings. We also analyze the impact of Localizer, comparisons of Localizer with other temporal localization models, pre-training/self-refinement of Localizer, and varying the number of keyframes.
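
The two chaining modes described in the abstract reduce to a simple control flow: in the forward chain the Localizer scores each uniformly sampled frame against the query and keeps the top-k keyframes for the Answerer, while in the reverse chain the Answerer's per-frame predictions supply keyframe pseudo-labels for refining the Localizer. Below is a minimal sketch of that flow; `forward_chain`, `reverse_chain_pseudo_labels`, `score_frame`, and `answer_from_frames` are hypothetical names standing in for the two parameter-efficiently fine-tuned BLIP-2 modules, not the authors' actual API.

```python
from typing import Callable, List, Sequence, Tuple

def forward_chain(
    frames: Sequence,                                  # uniformly sampled video frames
    question: str,
    score_frame: Callable[[object, str], float],       # Localizer: query-relevance of one frame (hypothetical wrapper)
    answer_from_frames: Callable[[List, str], str],    # Answerer: QA over the selected keyframes (hypothetical wrapper)
    num_keyframes: int = 4,
) -> Tuple[str, List[int]]:
    """Forward chain: pick the top-k language-aware keyframes, then answer from them."""
    scores = [score_frame(f, question) for f in frames]
    top_k = sorted(range(len(frames)), key=lambda i: scores[i], reverse=True)[:num_keyframes]
    keyframe_ids = sorted(top_k)                       # restore temporal order before answering
    answer = answer_from_frames([frames[i] for i in keyframe_ids], question)
    return answer, keyframe_ids

def reverse_chain_pseudo_labels(
    frames: Sequence,
    question: str,
    gold_answer: str,
    answer_from_frames: Callable[[List, str], str],
) -> List[int]:
    """Reverse chain (one plausible criterion, stated as an assumption): a frame becomes
    a keyframe pseudo-label when the Answerer already produces the ground-truth answer
    from that single frame, so no manual moment annotations are required."""
    return [i for i, f in enumerate(frames)
            if answer_from_frames([f], question) == gold_answer]
```

The pseudo-labels returned by the reverse chain would then serve as supervision targets when refining the Localizer, which is how SeViLA avoids expensive video moment localization annotations.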

Code Repositories

yui010206/sevila (Official, PyTorch)

Benchmarks

Benchmark | Methodology | Metrics
video-question-answering-on-next-qa | SeViLA | Accuracy: 73.8
video-question-answering-on-next-qa-efficient | SeViLA (4 frames) | 1:1 Accuracy: 73.8
video-question-answering-on-situated | SeViLA | Average Accuracy: 64.9
video-question-answering-on-situated | SeViLA (0-shot) | Average Accuracy: 44.6
zero-shot-video-question-answer-on-egoschema | SeViLA (4B) | Accuracy: 25.7
zero-shot-video-question-answer-on-egoschema-1 | SeViLA (4B) | Accuracy: 22.7
zero-shot-video-question-answer-on-intentqa | SeViLA (4B) | Accuracy: 60.9
zero-shot-video-question-answer-on-next-qa | SeViLA (4B) | Accuracy: 63.6
zero-shot-video-question-answer-on-tvqa | SeViLA (no speech) | Accuracy: 38.2
