mPLUG-2: A Modularized Multi-modal Foundation Model Across Text, Image and Video

Abstract

Recent years have witnessed a big convergence of language, vision, and multi-modal pretraining. In this work, we present mPLUG-2, a new unified paradigm with a modularized design for multi-modal pretraining, which can benefit from modality collaboration while addressing the problem of modality entanglement. In contrast to predominant paradigms that rely solely on sequence-to-sequence generation or encoder-based instance discrimination, mPLUG-2 introduces a multi-module composition network that shares common universal modules for modality collaboration and disentangles modality-specific modules to deal with modality entanglement. Different modules can be flexibly selected for different understanding and generation tasks across all modalities, including text, image, and video. Empirical studies show that mPLUG-2 achieves state-of-the-art or competitive results on a broad range of over 30 downstream tasks, spanning multi-modal tasks of image-text and video-text understanding and generation, and uni-modal tasks of text-only, image-only, and video-only understanding. Notably, mPLUG-2 achieves new state-of-the-art results of 48.0 top-1 accuracy and 80.3 CIDEr on the challenging MSRVTT video QA and video captioning tasks with a far smaller model size and data scale. It also demonstrates strong zero-shot transferability on vision-language and video-language tasks. Code and models will be released at https://github.com/alibaba/AliceMind.
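To make the modular composition idea concrete, here is a minimal, hypothetical PyTorch sketch (not the released mPLUG-2 code, which lives in the repositories listed below): modality-specific encoders process text and video separately to limit entanglement, a shared universal fusion module enables modality collaboration, and a task model is composed from whichever modules it needs. All class names, dimensions, and the task head are illustrative assumptions.

```python
import torch
import torch.nn as nn

class TextModule(nn.Module):
    """Hypothetical modality-specific text encoder."""
    def __init__(self, vocab_size=30522, dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, dim)
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True),
            num_layers=2,
        )

    def forward(self, token_ids):
        return self.encoder(self.embed(token_ids))

class VisionModule(nn.Module):
    """Hypothetical modality-specific visual encoder over pre-extracted frame features."""
    def __init__(self, feat_dim=512, dim=256):
        super().__init__()
        self.proj = nn.Linear(feat_dim, dim)
        self.encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True),
            num_layers=2,
        )

    def forward(self, feats):
        return self.encoder(self.proj(feats))

class UniversalModule(nn.Module):
    """Shared fusion block reused across tasks (the 'universal' module in the sketch)."""
    def __init__(self, dim=256):
        super().__init__()
        self.fuse = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=dim, nhead=4, batch_first=True),
            num_layers=2,
        )

    def forward(self, *sequences):
        # Concatenate the per-modality token sequences and fuse them jointly.
        return self.fuse(torch.cat(sequences, dim=1))

class ComposedModel(nn.Module):
    """Compose only the modules a given task needs (e.g. text + video for video QA)."""
    def __init__(self, branches, universal, head):
        super().__init__()
        self.branches = nn.ModuleDict(branches)
        self.universal = universal
        self.head = head

    def forward(self, inputs):
        encoded = [self.branches[name](x) for name, x in inputs.items()]
        fused = self.universal(*encoded)
        return self.head(fused.mean(dim=1))  # mean-pool, then task-specific head

# Assemble a video-QA style model from the text and video branches.
model = ComposedModel(
    {"text": TextModule(), "video": VisionModule()},
    universal=UniversalModule(),
    head=nn.Linear(256, 1000),  # hypothetical answer-vocabulary size
)
tokens = torch.randint(0, 30522, (2, 16))  # batch of token ids
frames = torch.randn(2, 8, 512)            # batch of 8 frame features per video
logits = model({"text": tokens, "video": frames})
print(logits.shape)  # torch.Size([2, 1000])
```

For the actual architecture, training recipes, and evaluation scripts, refer to the official repositories listed under Code Repositories.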

Code Repositories

X-PLUG/mPLUG-2 (PyTorch), mentioned on GitHub
alibaba/AliceMind (PyTorch), official
x-plug/mplug-owl (PyTorch), mentioned on GitHub

Benchmarks

Benchmark | Methodology | Metrics
action-classification-on-kinetics-400 | mPLUG-2 | Acc@1: 87.1, Acc@5: 97.7
action-classification-on-kinetics-600 | mPLUG-2 | Top-1 Accuracy: 89.8, Top-5 Accuracy: 98.3
action-classification-on-kinetics-700 | mPLUG-2 | Top-1 Accuracy: 80.4, Top-5 Accuracy: 94.9
video-captioning-on-msr-vtt-1 | mPLUG-2 | BLEU-4: 57.8, CIDEr: 80.0, METEOR: 34.9, ROUGE-L: 70.1
video-captioning-on-msvd-1 | mPLUG-2 | BLEU-4: 70.5, CIDEr: 165.8, METEOR: 48.4, ROUGE-L: 85.3
video-question-answering-on-msrvtt-qa | mPLUG-2 | Accuracy: 48.0
video-retrieval-on-didemo | mPLUG-2 | text-to-video R@1: 56.4, R@5: 79.1, R@10: 85.2
video-retrieval-on-lsmdc | mPLUG-2 | text-to-video R@1: 34.4, R@5: 55.2, R@10: 65.1
video-retrieval-on-msr-vtt-1ka | mPLUG-2 | text-to-video R@1: 53.1, R@5: 77.6, R@10: 84.7
visual-grounding-on-refcoco-test-b | mPLUG-2 | Accuracy (%): 86.05
visual-grounding-on-refcoco-testa | mPLUG-2 | Accuracy (%): 92.8
visual-grounding-on-refcoco-val | mPLUG-2 | Accuracy (%): 90.33
visual-question-answering-on-msrvtt-qa-1 | mPLUG-2 | Accuracy: 0.480
visual-question-answering-on-msvd-qa-1 | mPLUG-2 | Accuracy: 0.581
visual-question-answering-on-vqa-v2-test-dev-1 | mPLUG-2 | Accuracy: 81.11
zero-shot-video-retrieval-on-didemo | mPLUG-2 | text-to-video R@1: 45.7, R@5: 71.1, R@10: 79.2
zero-shot-video-retrieval-on-lsmdc | mPLUG-2 | text-to-video R@1: 24.1, R@5: 43.8, R@10: 52.0
zero-shot-video-retrieval-on-msr-vtt | mPLUG-2 | text-to-video R@1: 47.1, R@5: 69.7, R@10: 79.0
