Long-Form Video-Language Pre-Training with Multimodal Temporal Contrastive Learning

Yuchong Sun, Hongwei Xue, Ruihua Song, Bei Liu, Huan Yang, Jianlong Fu

Abstract

Large-scale video-language pre-training has shown significant improvement in video-language understanding tasks. Previous studies of video-language pre-training mainly focus on short-form videos (i.e., within 30 seconds) and sentences, leaving long-form video-language pre-training rarely explored. Directly learning representations from long-form videos and language may benefit many long-form video-language understanding tasks. However, it is challenging due to the difficulty of modeling long-range relationships and the heavy computational burden of processing more frames. In this paper, we introduce a Long-Form VIdeo-LAnguage pre-training model (LF-VILA) and train it on a large-scale long-form video and paragraph dataset constructed from an existing public dataset. To effectively capture the rich temporal dynamics and to better align video and language in an efficient end-to-end manner, we introduce two novel designs in our LF-VILA model. First, we propose a Multimodal Temporal Contrastive (MTC) loss that learns the temporal relation across different modalities by encouraging fine-grained alignment between long-form videos and paragraphs. Second, we propose a Hierarchical Temporal Window Attention (HTWA) mechanism that effectively captures long-range dependencies while reducing the computational cost of the Transformer. We fine-tune the pre-trained LF-VILA model on seven downstream long-form video-language understanding tasks covering paragraph-to-video retrieval and long-form video question answering, and achieve new state-of-the-art performance. Specifically, our model achieves a 16.1% relative improvement on the ActivityNet paragraph-to-video retrieval task and a 2.4% relative improvement on the How2QA task. We release our code, dataset, and pre-trained models at https://github.com/microsoft/XPretrain.
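
The abstract describes the MTC loss only at a high level. Below is a minimal PyTorch sketch of one plausible instantiation: a symmetric InfoNCE-style loss over temporally aligned clip and sentence embeddings, where matching time steps are positives and all other clip-sentence combinations in the sequence serve as negatives. The function name, tensor shapes, and temperature are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def multimodal_temporal_contrastive_loss(clip_emb, sent_emb, temperature=0.07):
    """Symmetric InfoNCE over temporally aligned clip/sentence pairs (a sketch).

    clip_emb: (T, D) embeddings of T video clips from one long video.
    sent_emb: (T, D) embeddings of the T sentences aligned to those clips.
    Positive pairs are the matching time steps; every other clip-sentence
    combination in the sequence acts as a negative.
    """
    clip_emb = F.normalize(clip_emb, dim=-1)
    sent_emb = F.normalize(sent_emb, dim=-1)
    logits = clip_emb @ sent_emb.t() / temperature       # (T, T) similarity matrix
    targets = torch.arange(clip_emb.size(0), device=clip_emb.device)
    loss_v2t = F.cross_entropy(logits, targets)          # video -> text direction
    loss_t2v = F.cross_entropy(logits.t(), targets)      # text -> video direction
    return 0.5 * (loss_v2t + loss_t2v)
```

Similarly, the HTWA mechanism can be pictured as self-attention restricted to non-overlapping temporal windows, with the window size growing across layers so that deeper layers cover longer spans at sub-quadratic cost. The class below is a hypothetical sketch under that assumption, not the authors' implementation.

```python
import torch
import torch.nn as nn

class TemporalWindowAttention(nn.Module):
    """Self-attention restricted to non-overlapping temporal windows (a sketch).

    Stacking layers with increasing `window_size` yields a hierarchy: early
    layers model short-range dynamics cheaply, and later layers see longer
    spans, keeping the cost well below full-sequence attention.
    """
    def __init__(self, dim, num_heads, window_size):
        super().__init__()
        self.window_size = window_size
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, x):                      # x: (B, T, D), T divisible by window_size
        b, t, d = x.shape
        w = self.window_size
        x = x.reshape(b * t // w, w, d)        # split the time axis into windows
        out, _ = self.attn(x, x, x)            # attend only inside each window
        return out.reshape(b, t, d)
```

For a fixed window size w, each window costs O(w^2) attention over t/w windows, so the per-layer cost is O(t*w), linear in sequence length; stacking layers with, e.g., window sizes 2, 4, and 8 widens the temporal receptive field layer by layer.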

Code Repositories

microsoft/xpretrain (Official, PyTorch)

Benchmarks

Benchmark                             Methodology   Metrics
video-retrieval-on-condensed-movies   LF-VILA       text-to-video R@1: 13.6, R@5: 32.5, R@10: 41.8
video-retrieval-on-queryd             LF-VILA       text-to-video R@1: 69.7, R@5: 85.7, R@10: 90.3
