X-Pool: Cross-Modal Language-Video Attention for Text-Video Retrieval

Satya Krishna Gorti Noel Vouitsis Junwei Ma Keyvan Golestan Maksims Volkovs Animesh Garg Guangwei Yu

Abstract

In text-video retrieval, the objective is to learn a cross-modal similarity function between a text and a video that ranks relevant text-video pairs higher than irrelevant pairs. However, videos inherently express a much wider gamut of information than texts. Instead, texts often capture sub-regions of entire videos and are most semantically similar to certain frames within videos. Therefore, for a given text, a retrieval model should focus on the text's most semantically similar video sub-regions to make a more relevant comparison. Yet, most existing works aggregate entire videos without directly considering text. Common text-agnostic aggregation schemes include mean-pooling or self-attention over the frames, but these are likely to encode misleading visual information not described in the given text. To address this, we propose a cross-modal attention model called X-Pool that reasons between a text and the frames of a video. Our core mechanism is a scaled dot-product attention for a text to attend to its most semantically similar frames. We then generate an aggregated video representation conditioned on the text's attention weights over the frames. We evaluate our method on the three benchmark datasets MSR-VTT, MSVD, and LSMDC, achieving new state-of-the-art results of up to 12% relative improvement in Recall@1. Our findings thereby highlight the importance of joint text-video reasoning to extract important visual cues according to the text. Full code and demo can be found at: https://layer6ai-labs.github.io/xpool/
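The text-conditioned pooling described in the abstract can be sketched roughly as follows. This is a simplified, single-query NumPy illustration of scaled dot-product attention from a text embedding over frame embeddings; it omits the learned query/key/value projections and output transformations that the actual X-Pool model uses, and the function name `xpool_aggregate` is our own label, not the paper's API.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def xpool_aggregate(text_emb, frame_embs):
    """Aggregate frame embeddings into a single video embedding,
    weighted by the text's scaled dot-product attention over frames.

    text_emb:   (d,)            -- embedding of the query text
    frame_embs: (num_frames, d) -- per-frame embeddings of the video
    returns:    (d,)            -- text-conditioned video embedding
    """
    d = text_emb.shape[-1]
    # Scaled dot-product scores: how similar each frame is to the text.
    scores = frame_embs @ text_emb / np.sqrt(d)      # (num_frames,)
    weights = softmax(scores)                        # attention over frames
    # Weighted sum of frames, conditioned on the text's attention weights.
    return weights @ frame_embs                      # (d,)
```

In contrast to mean-pooling, frames that are semantically closer to the text dominate the aggregate, so visual content not described by the text contributes less to the similarity comparison.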

Code Repositories

layer6ai-labs/xpool (official, PyTorch)

Benchmarks

Benchmark                      Method  Direction      R@1   R@5   R@10  Median Rank  Mean Rank
video-retrieval-on-lsmdc       X-Pool  text-to-video  25.2  43.7  53.5  8.0          53.2
video-retrieval-on-lsmdc       X-Pool  video-to-text  22.7  42.6  51.2  10.0         47.4
video-retrieval-on-msr-vtt-1ka X-Pool  text-to-video  46.9  72.8  82.2  2.0          14.3
video-retrieval-on-msr-vtt-1ka X-Pool  video-to-text  44.4  73.3  84.0  2.0          9.0
video-retrieval-on-msvd        X-Pool  text-to-video  47.2  77.4  86.0  2.0          9.3
video-retrieval-on-msvd        X-Pool  video-to-text  66.4  90.0  94.2  1.0          3.3
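The retrieval metrics reported above (Recall@K, median rank, mean rank) are standard and can be computed from a text-video similarity matrix as sketched below. This is a generic illustration, not the paper's evaluation code; `retrieval_metrics` is a hypothetical helper, and it assumes the ground-truth match for text i is video i (the diagonal).

```python
import numpy as np

def retrieval_metrics(sim, ks=(1, 5, 10)):
    """Compute Recall@K (%), median rank, and mean rank.

    sim: (n, n) matrix where sim[i, j] is the similarity of text i
         to video j; the correct video for text i is assumed to be
         video i (the diagonal).
    """
    n = sim.shape[0]
    # Sort candidates by descending similarity for each query.
    order = np.argsort(-sim, axis=1)
    # 1-indexed rank of the correct (diagonal) video per query.
    ranks = np.array([np.where(order[i] == i)[0][0] + 1 for i in range(n)])
    metrics = {f"R@{k}": 100.0 * float(np.mean(ranks <= k)) for k in ks}
    metrics["MedR"] = float(np.median(ranks))
    metrics["MeanR"] = float(np.mean(ranks))
    return metrics
```

Recall@1 of 46.9 on MSR-VTT-1kA, for example, means the correct video is the top-ranked candidate for 46.9% of the test queries.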
