
LanguageBind: Extending Video-Language Pretraining to N-modality by Language-based Semantic Alignment

Bin Zhu, Bin Lin, Munan Ning, Yang Yan, Jiaxi Cui, HongFa Wang, Yatian Pang, Wenhao Jiang, Junwu Zhang, Zongwei Li, Wancai Zhang, Zhifeng Li, Wei Liu, Li Yuan


Abstract

Video-language (VL) pretraining has achieved remarkable improvements on multiple downstream tasks. However, the current VL pretraining framework is difficult to extend to multiple modalities (N modalities, N >= 3) beyond vision and language. We therefore propose LanguageBind, which takes language as the bind across different modalities, since the language modality is well explored and contains rich semantics. Specifically, we freeze the language encoder acquired from VL pretraining and train encoders for the other modalities with contrastive learning. As a result, all modalities are mapped into a shared feature space, achieving multimodal semantic alignment. While LanguageBind ensures that we can extend VL modalities to N modalities, we also need a high-quality dataset of alignment pairs centered on language. We therefore propose VIDAL-10M, a dataset of 10 million pairs of Video, Infrared, Depth, and Audio with their corresponding Language. In VIDAL-10M, all videos come from short-video platforms and carry complete semantics, rather than being truncated segments of long videos, and the video, depth, infrared, and audio modalities are all aligned with their textual descriptions. LanguageBind achieves superior performance across 15 benchmarks covering video, audio, depth, and infrared. Moreover, multiple experiments provide evidence for the effectiveness of LanguageBind in achieving indirect alignment and complementarity among diverse modalities. Code address: https://github.com/PKU-YuanGroup/LanguageBind
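The core recipe is simple: keep the text tower obtained from VL pretraining frozen, and pull each new modality's encoder toward it with a symmetric contrastive (InfoNCE) objective. Below is a minimal PyTorch sketch of that idea, using toy linear towers and random features in place of the paper's ViT encoders; the `Tower` class, `contrastive_loss` helper, dimensions, and hyperparameters are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Tower(nn.Module):
    """Stand-in encoder producing L2-normalized embeddings (the real model uses ViT towers)."""
    def __init__(self, in_dim, embed_dim=512):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, embed_dim), nn.GELU(),
                                 nn.Linear(embed_dim, embed_dim))

    def forward(self, x):
        return F.normalize(self.net(x), dim=-1)

def contrastive_loss(a, b, temperature=0.07):
    """Symmetric InfoNCE: the i-th row of `a` is paired with the i-th row of `b`."""
    logits = a @ b.t() / temperature
    targets = torch.arange(a.size(0))
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))

# Frozen language tower: in LanguageBind this is the text encoder from VL pretraining.
text_tower = Tower(in_dim=768)
text_tower.requires_grad_(False)

# Trainable encoder for one additional modality (depth here, as an example).
depth_tower = Tower(in_dim=1024)
opt = torch.optim.AdamW(depth_tower.parameters(), lr=1e-4)

# One toy optimization step on random tensors standing in for (depth, caption) pairs.
depth_x = torch.randn(8, 1024)
text_x = torch.randn(8, 768)
loss = contrastive_loss(depth_tower(depth_x), text_tower(text_x))
loss.backward()
opt.step()
```

Because every modality is trained against the same frozen language space, two non-language modalities (say, depth and audio) end up directly comparable even though no depth-audio pairs were ever used; this is the indirect alignment the abstract refers to.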

Code Repositories

pku-yuangroup/video-bench (mentioned in GitHub)
PKU-YuanGroup/MoE-LLaVA (PyTorch; mentioned in GitHub)
zhihaozhang97/ru-ai (PyTorch; mentioned in GitHub)
pku-yuangroup/languagebind (official; PyTorch; mentioned in GitHub)
PKU-YuanGroup/Video-LLaVA (PyTorch; mentioned in GitHub)
PKU-YuanGroup/LLMBind (PyTorch; mentioned in GitHub)

Benchmarks

Each entry lists the benchmark, the methodology, and the reported metrics.

temporal-relation-extraction-on-vinoground, LanguageBind:
Group Score: 1.2 | Text Score: 10.6 | Video Score: 5

zero-shot-action-recognition-on-kinetics, LanguageBind:
Top-1 Accuracy: 64.1 | Top-5 Accuracy: 85.7

zero-shot-video-retrieval-on-activitynet, LanguageBind (ViT-L/14):
text-to-video: R@1 38.4 | R@5 66.6 | R@10 77.9
video-to-text: R@1 35.7 | R@5 65.8 | R@10 77.8

zero-shot-video-retrieval-on-activitynet, LanguageBind (ViT-H/14):
text-to-video: R@1 41.0 | R@5 68.4 | R@10 80.0
video-to-text: R@1 39.1 | R@5 69.8 | R@10 81.1

zero-shot-video-retrieval-on-didemo, LanguageBind (ViT-H/14):
text-to-video: R@1 39.9 | R@5 66.1 | R@10 74.6 | Median Rank 2
video-to-text: R@1 39.8 | R@5 67.8 | R@10 76.2

zero-shot-video-retrieval-on-didemo, LanguageBind (ViT-L/14):
text-to-video: R@1 39.7 | R@5 65.5 | R@10 73.8 | Median Rank 2.0
video-to-text: R@1 38.4 | R@5 66.6 | R@10 77.9

zero-shot-video-retrieval-on-msr-vtt, LanguageBind (ViT-L/14):
text-to-video: R@1 42.8 | R@5 67.5 | R@10 76.0 | Median Rank 2.0
video-to-text: R@1 38.3 | R@5 65.8 | R@10 77.8 | Median Rank 3.0

zero-shot-video-retrieval-on-msr-vtt, LanguageBind (ViT-H/14):
text-to-video: R@1 44.8 | R@5 70.0 | R@10 78.7 | Median Rank 2
video-to-text: R@1 40.9 | R@5 66.4 | R@10 75.7 | Median Rank 2

zero-shot-video-retrieval-on-msvd, LanguageBind (ViT-H/14):
text-to-video: R@1 53.9 | R@5 80.4 | R@10 87.8 | Median Rank 1
video-to-text: R@1 72.0 | R@5 91.4 | R@10 96.3 | Median Rank 1

zero-shot-video-retrieval-on-msvd, LanguageBind (ViT-L/14):
text-to-video: R@1 54.1 | R@5 81.1 | R@10 88.1 | Median Rank 1.0
video-to-text: R@1 69.7 | R@5 91.8 | R@10 97.9 | Median Rank 1.0
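For reference, the retrieval numbers above follow the standard protocol: rank every candidate by similarity to the query, then report the fraction of queries whose ground-truth match lands in the top K (R@K, as a percentage) and the median rank of the match. The Kinetics Top-1/Top-5 accuracies are the classification analogue, scoring one text prompt per class. Below is a minimal sketch of computing such metrics, assuming a square similarity matrix whose diagonal holds the correct pairs; this is not the paper's evaluation code.

```python
import torch
import torch.nn.functional as F

def retrieval_metrics(sim):
    """sim[i, j]: similarity of query i to candidate j; (i, i) is the correct match."""
    order = sim.argsort(dim=1, descending=True)          # candidates ranked per query
    gt = torch.arange(sim.size(0)).unsqueeze(1)
    ranks = (order == gt).float().argmax(dim=1) + 1      # 1-based rank of the match
    metrics = {f"R@{k}": 100.0 * (ranks <= k).float().mean().item() for k in (1, 5, 10)}
    metrics["Median Rank"] = ranks.float().median().item()
    return metrics

# Toy example: 100 paired text/video embeddings drawn at random.
t = F.normalize(torch.randn(100, 512), dim=-1)
v = F.normalize(torch.randn(100, 512), dim=-1)
print(retrieval_metrics(t @ v.t()))    # text-to-video
print(retrieval_metrics(v @ t.t()))    # video-to-text
```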
