
Abstract
Vision-language pre-training has significantly improved performance across a wide range of image-language applications. However, the pre-training process for video-related tasks demands exceptionally large computational and data resources, which hinders the progress of video-language models. This paper investigates a straightforward, highly efficient, and resource-light approach for adapting existing image-language pre-trained models to dense video understanding. Preliminary experiments show that directly fine-tuning pre-trained image-language models on video datasets with multiple frames as input leads to performance saturation or even degradation. Further investigation reveals that this is largely attributable to the bias of learned high-norm visual features. Motivated by this finding, we propose a simple but effective pooling strategy that smooths the feature distribution along the temporal dimension, thereby reducing the dominant impact of extreme features. The new model is termed Pooling LLaVA, or PLLaVA for short, and achieves new state-of-the-art results on modern benchmarks for both video question answering and captioning. Notably, on the recent popular Video ChatGPT benchmark, PLLaVA attains an average score of 3.48 out of 5 across five evaluation dimensions, exceeding the previous SOTA result of GPT4V (IG-VLM) by 9%. On the latest multiple-choice benchmark MVBench, PLLaVA achieves an average accuracy of 58.1% across 20 sub-tasks, 14.5% higher than GPT4V (IG-VLM). Code is available at https://github.com/magic-research/PLLaVA.
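To make the pooling idea concrete, below is a minimal PyTorch sketch of adaptive average pooling applied over a video's spatial-temporal token grid, in the spirit of the strategy the abstract describes. All shapes (16 frames, a 24x24 patch grid of 1024-d features, a 12x12 pooled target) are illustrative assumptions, not the exact PLLaVA configuration; see the linked repository for the official implementation.

```python
import torch
import torch.nn as nn

# Assumed shapes for illustration: 16 frames, each encoded by a CLIP-like
# ViT into a 24x24 grid of 1024-d patch features.
B, T, H, W, C = 1, 16, 24, 24, 1024
frame_features = torch.randn(B, T, H, W, C)

# Adaptive average pooling over the (T, H, W) token grid. Shrinking the
# spatial grid (24x24 -> 12x12) while keeping all T frames averages
# neighboring tokens, damping extreme high-norm features before the
# tokens are handed to the language model.
pool = nn.AdaptiveAvgPool3d(output_size=(T, 12, 12))

# AdaptiveAvgPool3d expects (B, C, D, H, W), so move channels forward.
x = frame_features.permute(0, 4, 1, 2, 3)               # (B, C, T, H, W)
x = pool(x)                                              # (B, C, T, 12, 12)
video_tokens = x.permute(0, 2, 3, 4, 1).flatten(1, 3)    # (B, T*12*12, C)

print(video_tokens.shape)  # torch.Size([1, 2304, 1024])
```

A practical side effect of this design is that pooling also cuts the number of visual tokens per video (here from 16x24x24 = 9216 down to 2304), which keeps the language model's context budget manageable as the frame count grows.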
Code Repository
magic-research/PLLaVA
Official
pytorch
Mentioned in GitHub
Benchmarks
| Benchmark | Method | Metrics |
|---|---|---|
| video-based-generative-performance | PLLaVA-34B | Consistency: 3.25 Contextual Understanding: 3.90 Correctness of Information: 3.60 Detail Orientation: 3.20 Temporal Understanding: 2.67 mean: 3.32 |
| video-based-generative-performance-1 (Correctness of Information) | PLLaVA-34B | gpt-score: 3.60 |
| video-based-generative-performance-2 (Consistency) | PLLaVA-34B | gpt-score: 3.25 |
| video-based-generative-performance-3 (Contextual Understanding) | PLLaVA-34B | gpt-score: 3.90 |
| video-based-generative-performance-4 (Detail Orientation) | PLLaVA-34B | gpt-score: 3.20 |
| video-based-generative-performance-5 (Temporal Understanding) | PLLaVA-34B | gpt-score: 2.67 |
| video-question-answering-on-mvbench | PLLaVA | Avg.: 58.1 |
| video-question-answering-on-tvbench | PLLaVA-34B | Average Accuracy: 42.3 |
| video-question-answering-on-tvbench | PLLaVA-7B | Average Accuracy: 34.9 |
| video-question-answering-on-tvbench | PLLaVA-13B | Average Accuracy: 36.4 |
| zeroshot-video-question-answer-on-activitynet | PLLaVA (34B) | Accuracy: 60.9 Confidence Score: 3.7 |
| zeroshot-video-question-answer-on-msrvtt-qa | PLLaVA (34B) | Accuracy: 68.7 Confidence Score: 3.6 |
| zeroshot-video-question-answer-on-msvd-qa | PLLaVA (34B) | Accuracy: 79.9 Confidence Score: 4.2 |
| zeroshot-video-question-answer-on-tgif-qa | PLLaVA | Accuracy: 80.6 Confidence Score: 4.3 |