Video-LLaMA: An Instruction-tuned Audio-Visual Language Model for Video Understanding

Hang Zhang; Xin Li; Lidong Bing

Abstract

We present Video-LLaMA, a multi-modal framework that empowers Large Language Models (LLMs) with the capability of understanding both visual and auditory content in videos. Video-LLaMA bootstraps cross-modal training from frozen pre-trained visual and audio encoders and frozen LLMs. Unlike previous works that equip LLMs to process only visual or only audio signals, Video-LLaMA enables video comprehension by tackling two challenges: (1) capturing the temporal changes in visual scenes, and (2) integrating audio-visual signals. To address the first challenge, we propose a Video Q-former that assembles a pre-trained image encoder into our video encoder, and we introduce a video-to-text generation task to learn video-language correspondence. For the second challenge, we leverage ImageBind, a universal embedding model that aligns multiple modalities, as the pre-trained audio encoder, and we introduce an Audio Q-former on top of ImageBind to learn reasonable auditory query embeddings for the LLM module. To align the outputs of both the visual and audio encoders with the LLM's embedding space, we first train Video-LLaMA on massive video/image-caption pairs and then tune the model on visual-instruction datasets of moderate size but higher quality. We find that Video-LLaMA shows the ability to perceive and comprehend video content and to generate meaningful responses grounded in the visual and auditory information presented in the videos.
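For intuition, the video branch described above can be pictured as a small query transformer that pools per-frame features from a frozen image encoder into a fixed set of query embeddings, which are then projected into the LLM's embedding space as soft prompts. The PyTorch sketch below is only an illustration under assumed dimensions (frame_dim, llm_dim, num_queries and the module names are hypothetical placeholders); the actual implementation builds on the BLIP-2 Q-Former and adds an analogous Audio Q-former over ImageBind features.

import torch
import torch.nn as nn

class VideoQFormerSketch(nn.Module):
    # Pools per-frame features from a frozen image encoder into a fixed set of
    # query embeddings and projects them into the LLM embedding space.
    def __init__(self, frame_dim=1408, hidden_dim=768, llm_dim=4096,
                 num_queries=32, num_layers=2, max_frames=64):
        super().__init__()
        self.frame_proj = nn.Linear(frame_dim, hidden_dim)       # adapt frozen image-encoder features
        self.frame_pos = nn.Embedding(max_frames, hidden_dim)    # temporal (frame-position) embeddings
        self.queries = nn.Parameter(torch.randn(num_queries, hidden_dim))  # learnable video queries
        layer = nn.TransformerDecoderLayer(d_model=hidden_dim, nhead=8, batch_first=True)
        self.qformer = nn.TransformerDecoder(layer, num_layers=num_layers)  # queries cross-attend to frames
        self.to_llm = nn.Linear(hidden_dim, llm_dim)              # project queries into the LLM embedding space

    def forward(self, frame_feats):
        # frame_feats: (batch, num_frames, frame_dim), e.g. pooled outputs of a frozen ViT per sampled frame
        b, t, _ = frame_feats.shape
        pos = self.frame_pos(torch.arange(t, device=frame_feats.device))   # (num_frames, hidden_dim)
        frames = self.frame_proj(frame_feats) + pos                        # inject temporal order
        queries = self.queries.unsqueeze(0).expand(b, -1, -1)              # (batch, num_queries, hidden_dim)
        video_queries = self.qformer(tgt=queries, memory=frames)           # (batch, num_queries, hidden_dim)
        return self.to_llm(video_queries)                                  # soft prompts for the frozen LLM

frame_feats = torch.randn(2, 8, 1408)            # toy example: 2 videos, 8 sampled frames each
soft_prompts = VideoQFormerSketch()(frame_feats)
print(soft_prompts.shape)                        # torch.Size([2, 32, 4096])

In both training stages the image encoder and the LLM stay frozen; only lightweight components such as the Q-former-style modules and the projection layers are updated.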

Code Repositories

damo-nlp-sg/videollama2 (PyTorch)
damo-nlp-sg/video-llama (official, PyTorch)
damo-nlp-sg/videollama3 (PyTorch)
xinding-sys/StreamMind (PyTorch)

Benchmarks

Benchmark | Methodology | Metrics
video-based-generative-performance | Video-LLaMA | Consistency: 1.79; Contextual Understanding: 2.16; Correctness of Information: 1.96; Detail Orientation: 2.18; Temporal Understanding: 1.82; Mean: 1.98
video-based-generative-performance-1 | Video-LLaMA | GPT score: 1.96
video-based-generative-performance-2 | Video-LLaMA | GPT score: 1.79
video-based-generative-performance-3 | Video-LLaMA | GPT score: 2.16
video-based-generative-performance-4 | Video-LLaMA | GPT score: 2.18
video-based-generative-performance-5 | Video-LLaMA | GPT score: 1.82
video-question-answering-on-mvbench | Video-LLaMA | Avg.: 34.1
video-text-retrieval-on-test-of-time | Video-LLaMA | 2-Class Accuracy: 88.33
zeroshot-video-question-answer-on-activitynet | Video-LLaMA | Accuracy: 12.4; Confidence Score: 1.1
zeroshot-video-question-answer-on-msrvtt-qa | Video-LLaMA-7B | Accuracy: 29.6; Confidence Score: 1.8
zeroshot-video-question-answer-on-msvd-qa | Video-LLaMA-7B | Accuracy: 51.6; Confidence Score: 2.5
