MMTF: Multi-Modal Temporal Fusion for Commonsense Video Question Answering

Sanguk Park, Dongchan Park, Geonwoo Park, Mobeen Ahmad


Abstract

Video question answering is a challenging task that requires understanding the video and the question in a shared context. The task becomes even harder when questions call for reasoning, such as predicting future events or explaining counterfactual ones, because answering them requires knowledge that is not explicitly shown in the video. Existing methods fuse video and language features at a coarse granularity and ignore temporal information. To address this, we propose a novel vision-text fusion module that learns the temporal context of the video and the question. Our module expands question tokens along the video's temporal axis and fuses them with video features to generate new representations that carry both local and global context. We evaluate our method on four VideoQA datasets: MSVD-QA, NExT-QA, Causal-VidQA, and AGQA-2.0.
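
To make the fusion idea concrete, below is a minimal PyTorch sketch of the step described above: question tokens are summarized, broadcast along the video's temporal axis, and fused with per-frame features, after which local context (a temporal convolution) and global context (temporal self-attention) are computed. The feature shapes, the pooling of question tokens, and the concatenation/convolution/attention operators are all assumptions for illustration, not the exact MMTF architecture.

```python
# Minimal sketch of question-conditioned temporal fusion, assuming:
#   video features  V: (batch, T, d)  -- one feature vector per frame
#   question tokens Q: (batch, L, d)  -- token embeddings from a text encoder
# This is illustrative only; the concrete MMTF design may differ.
import torch
import torch.nn as nn


class TemporalFusionSketch(nn.Module):
    def __init__(self, dim: int = 256, heads: int = 4):
        super().__init__()
        # Summarize question tokens into a single query vector (assumption).
        self.question_pool = nn.Linear(dim, dim)
        # Local temporal context: depthwise 1-D convolution over frames.
        self.local_conv = nn.Conv1d(dim, dim, kernel_size=3, padding=1, groups=dim)
        # Global temporal context: self-attention across the temporal axis.
        self.global_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        # Project the concatenated (frame, question) features back to dim.
        self.proj = nn.Linear(2 * dim, dim)

    def forward(self, video: torch.Tensor, question: torch.Tensor) -> torch.Tensor:
        # video: (B, T, d), question: (B, L, d)
        B, T, _ = video.shape
        # Pool the question and expand it along the temporal axis.
        q = self.question_pool(question.mean(dim=1))           # (B, d)
        q_expanded = q.unsqueeze(1).expand(-1, T, -1)           # (B, T, d)
        # Fuse per frame by concatenation, then project back to d.
        fused = self.proj(torch.cat([video, q_expanded], dim=-1))  # (B, T, d)
        # Local context via temporal convolution (expects channels-first).
        local = self.local_conv(fused.transpose(1, 2)).transpose(1, 2)
        # Global context via self-attention over all frames.
        global_ctx, _ = self.global_attn(fused, fused, fused)
        return local + global_ctx                               # (B, T, d)


if __name__ == "__main__":
    video = torch.randn(2, 16, 256)      # 16 frames
    question = torch.randn(2, 12, 256)   # 12 question tokens
    out = TemporalFusionSketch()(video, question)
    print(out.shape)                     # torch.Size([2, 16, 256])
```

The fused per-frame representations can then be pooled or fed to an answer decoder; the key point is that every frame feature is conditioned on the question before temporal context is aggregated.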

Benchmarks

Benchmark                                        Methodology   Metrics
video-question-answering-on-agqa-2-0-balanced    MMTF          Average Accuracy: 44.36
