VindLU: A Recipe for Effective Video-and-Language Pretraining

Feng Cheng, Xizi Wang, Jie Lei, David Crandall, Mohit Bansal, Gedas Bertasius


Abstract

The last several years have witnessed remarkable progress in video-and-language (VidL) understanding. However, most modern VidL approaches use complex and specialized model architectures and sophisticated pretraining protocols, making these frameworks difficult to reproduce, analyze, and compare. Hence, instead of proposing yet another new VidL model, this paper conducts a thorough empirical study demystifying the most important factors in VidL model design. Among the factors we investigate are (i) the spatiotemporal architecture design, (ii) the multimodal fusion schemes, (iii) the pretraining objectives, (iv) the choice of pretraining data, (v) pretraining and finetuning protocols, and (vi) dataset and model scaling. Our empirical study reveals that the most important design factors are: temporal modeling, video-to-text multimodal fusion, masked modeling objectives, and joint training on images and videos. Using these empirical insights, we then develop a step-by-step recipe, dubbed VindLU, for effective VidL pretraining. Our final model trained using this recipe achieves results comparable to or better than the state of the art on several VidL tasks without relying on external CLIP pretraining. In particular, on the text-to-video retrieval task, our approach obtains 61.2% (R@1) on DiDeMo and 55.0% (R@1) on ActivityNet, outperforming the current SOTA by 7.8% and 6.1%, respectively. Furthermore, our model also obtains state-of-the-art video question-answering results on ActivityNet-QA, MSRVTT-QA, MSRVTT-MC, and TVQA. Our code and pretrained models are publicly available at: https://github.com/klauscc/VindLU.
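One of the design factors the abstract highlights is the masked modeling objective. The sketch below illustrates the standard BERT-style token-masking step behind such objectives; the `[MASK]` token and the 15% masking rate follow common practice and are assumptions here, not VindLU's exact configuration.

```python
import random

def mask_tokens(tokens, rate=0.15, rng=None):
    """Replace each token with "[MASK]" with probability `rate`; return the
    corrupted sequence plus the positions the model must reconstruct."""
    rng = rng or random.Random(0)  # fixed seed so the demo is reproducible
    corrupted, targets = [], []
    for i, tok in enumerate(tokens):
        if rng.random() < rate:
            corrupted.append("[MASK]")  # hide this token from the model
            targets.append(i)           # model is trained to predict it back
        else:
            corrupted.append(tok)
    return corrupted, targets

# Toy caption, not from the paper's pretraining data.
caption = "a dog catches a frisbee in the park".split()
masked, targets = mask_tokens(caption)
print(masked, targets)
```

During pretraining, the model sees the corrupted sequence (conditioned on the video) and is supervised to predict the original tokens at the masked positions.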

Code Repositories

klauscc/vindlu (official, PyTorch)

Benchmarks

| Benchmark | Method | Metrics |
| --- | --- | --- |
| Video Question Answering on ActivityNet-QA | VindLU | Accuracy: 44.7 |
| Video Question Answering on MSRVTT-MC | VindLU | Accuracy: 95.5 |
| Video Question Answering on MSRVTT-QA | VindLU | Accuracy: 44.6 |
| Video Question Answering on TVQA | VindLU | Accuracy: 79.0 |
| Video Retrieval on ActivityNet | VindLU | text-to-video R@1: 55.0, R@5: 81.4, R@10: 89.7 |
| Video Retrieval on Condensed Movies | VindLU | text-to-video R@1: 18.4, R@5: 36.4, R@10: 44.3 |
| Video Retrieval on DiDeMo | VindLU | text-to-video R@1: 61.2, R@5: 85.8, R@10: 91.0 |
| Video Retrieval on MSR-VTT-1kA | VindLU | text-to-video R@1: 46.5, R@5: 71.5, R@10: 80.4 |
| Video Retrieval on QuerYd | VindLU | text-to-video R@1: 67.8, R@5: 86.3, R@10: 81.8 |
| Video Retrieval on SSv2-Label | VindLU | text-to-video R@1: 53.1, R@5: 81.8 |
| Video Retrieval on SSv2-Template | VindLU | text-to-video R@1: 83.3, R@5: 100, R@10: 100 |
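The retrieval numbers above are text-to-video Recall@K scores. A minimal sketch of how this metric is typically computed: each text query ranks all candidate videos by similarity, and R@K is the percentage of queries whose ground-truth video lands in the top K. The similarity matrix below is a toy example, not data from the paper.

```python
def recall_at_k(sim, k):
    """sim[i][j] = similarity of text query i to video j; by convention the
    ground-truth video for query i is video i (diagonal pairing)."""
    hits = 0
    for i, row in enumerate(sim):
        # Video indices sorted by descending similarity for this query.
        ranked = sorted(range(len(row)), key=lambda j: row[j], reverse=True)
        if i in ranked[:k]:
            hits += 1
    return 100.0 * hits / len(sim)

# Toy 3-query x 3-video similarity matrix.
sim = [
    [0.9, 0.2, 0.1],  # query 0: correct video ranked 1st
    [0.4, 0.3, 0.8],  # query 1: correct video ranked 3rd
    [0.1, 0.6, 0.7],  # query 2: correct video ranked 1st
]
print(recall_at_k(sim, 1))  # 2 of 3 queries rank the correct video first, ~66.7
```

R@5 and R@10 are computed the same way with larger K, which is why they are monotonically non-decreasing in K for a fixed model.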
