
Abstract
Foundation models have recently demonstrated excellent performance on a variety of downstream tasks in computer vision. However, most existing vision foundation models focus only on image-level pretraining and adaptation, which is limiting for dynamic and complex video-level understanding tasks. To fill this gap, we present InternVideo, a general video foundation model that combines the strengths of generative and discriminative self-supervised video learning. Specifically, InternVideo efficiently explores masked video modeling and video-language contrastive learning as pretraining objectives, and selectively coordinates the video representations of these two complementary frameworks in a learnable manner to boost a wide range of video applications. Without bells and whistles, InternVideo achieves state-of-the-art performance on 39 video datasets spanning extensive tasks, including video action recognition/detection, video-language alignment, and open-world video applications. In particular, our method obtains 91.1% and 77.2% top-1 accuracy on the challenging Kinetics-400 and Something-Something V2 benchmarks, respectively. All of these results effectively demonstrate the generality of InternVideo for video understanding. The code will be released at https://github.com/OpenGVLab/InternVideo.
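To make the two pretraining objectives concrete, below is a minimal PyTorch sketch of masked video modeling (generative), video-language contrastive learning (discriminative), and a learnable combination of the two branch representations. Every module, dimension, and the scalar-gate fusion here are illustrative assumptions at toy scale, not InternVideo's actual architecture; the paper coordinates the two branches with dedicated interaction modules rather than a single learned scalar.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyVideoModel(nn.Module):
    """Toy stand-in for a video foundation model with two pretraining branches."""
    def __init__(self, dim=256):
        super().__init__()
        self.mvm_encoder = nn.Linear(dim, dim)    # generative branch (masked video modeling)
        self.decoder = nn.Linear(dim, dim)        # regresses features of masked patches
        self.clip_encoder = nn.Linear(dim, dim)   # discriminative branch (video-language contrastive)
        self.text_encoder = nn.Linear(dim, dim)
        self.alpha = nn.Parameter(torch.tensor(0.5))        # learnable coordination gate (assumed)
        self.logit_scale = nn.Parameter(torch.tensor(2.0))  # temperature for contrastive logits

    def mvm_loss(self, patches, mask):
        # zero out masked patch tokens, encode the rest, and regress the originals
        visible = patches * (~mask).unsqueeze(-1).float()
        pred = self.decoder(self.mvm_encoder(visible))
        return F.mse_loss(pred[mask], patches[mask])

    def contrastive_loss(self, patches, text_feat):
        # symmetric InfoNCE over a batch of matched video-text pairs
        v = F.normalize(self.clip_encoder(patches).mean(dim=1), dim=-1)
        t = F.normalize(text_feat, dim=-1)
        logits = self.logit_scale.exp() * v @ t.T
        labels = torch.arange(v.size(0))
        return 0.5 * (F.cross_entropy(logits, labels) + F.cross_entropy(logits.T, labels))

    def fused_representation(self, patches):
        # learnable combination of the two complementary branch representations
        return self.alpha * self.mvm_encoder(patches) + (1 - self.alpha) * self.clip_encoder(patches)

model = ToyVideoModel()
patches = torch.randn(4, 16, 256)   # (batch, patch tokens, feature dim), assumed shapes
mask = torch.rand(4, 16) < 0.75     # high mask ratio, VideoMAE-style
text = model.text_encoder(torch.randn(4, 256))
loss = model.mvm_loss(patches, mask) + model.contrastive_loss(patches, text)
loss.backward()
fused = model.fused_representation(patches)  # representation handed to downstream tasks
print(loss.item(), fused.shape)
```

The key idea the sketch captures is that the two losses are complementary: reconstruction encourages fine-grained spatiotemporal features, while the contrastive term aligns video with language, and the fusion lets downstream tasks draw on both.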
Code Repositories
- opengvlab/internvideo (Official; PyTorch; mentioned in GitHub)
- yingsen1/unimd (PyTorch; mentioned in GitHub)
Benchmarks
| Benchmark | Method | Metrics |
|---|---|---|
| action-classification-on-kinetics-400 | InternVideo | Top-1 Accuracy: 91.1 |
| action-classification-on-kinetics-600 | InternVideo-T | Top-1 Accuracy: 91.3 |
| action-classification-on-kinetics-700 | InternVideo-T | Top-1 Accuracy: 84.0 |
| action-recognition-in-videos-on-something | InternVideo | Top-1 Accuracy: 77.2 |
| action-recognition-in-videos-on-something-1 | InternVideo | Top-1 Accuracy: 70.0 |
| action-recognition-on-ava-v2-2 | InternVideo | mAP: 41.01 |
| open-set-action-recognition-on-ucf-hmdb | InternVideo | AUROC: 85.48 |
| open-set-action-recognition-on-ucf101-mitv2 | InternVideo | AUROC: 91.85 |
| spatio-temporal-action-localization-on-ava | InternVideo | val mAP: 41.01 |
| temporal-action-localization-on-activitynet | InternVideo | mAP: 39.00 |
| temporal-action-localization-on-fineaction | InternVideo | mAP: 17.57 |
| temporal-action-localization-on-hacs | InternVideo | Average-mAP: 41.55 |
| temporal-action-localization-on-thumos14 | ActionFormer (InternVideo features) | Avg mAP (0.3:0.7): 71.58 |
| video-question-answering-on-situated | InternVideo | Average Accuracy: 58.7 |
| video-retrieval-on-activitynet | InternVideo | text-to-video R@1: 62.2 video-to-text R@1: 62.8 |
| video-retrieval-on-didemo | InternVideo | text-to-video R@1: 57.9 video-to-text R@1: 59.1 |
| video-retrieval-on-lsmdc | InternVideo | text-to-video R@1: 34.0 video-to-text R@1: 34.9 |
| video-retrieval-on-msr-vtt | InternVideo | text-to-video R@1: 55.2 video-to-text R@1: 57.9 |
| video-retrieval-on-msvd | InternVideo | text-to-video R@1: 58.4 video-to-text R@1: 76.3 |
| video-retrieval-on-vatex | InternVideo | text-to-video R@1: 71.1 video-to-text R@1: 87.2 |
| visual-question-answering-on-msrvtt-qa-1 | InternVideo | Accuracy: 47.1 |
| visual-question-answering-on-msvd-qa-1 | InternVideo | Accuracy: 55.5 |
| visual-question-answering-on-tgif-qa | InternVideo | Accuracy: 72.2 |
| zero-shot-video-question-answer-on-egoschema-1 | InternVideo | Accuracy: 32.1 |
| zero-shot-video-question-answer-on-star | InternVideo | Accuracy: 41.6 |
| zero-shot-video-question-answer-on-tvqa | InternVideo (no speech) | Accuracy: 35.9 |
| zero-shot-video-retrieval-on-activitynet | InternVideo | text-to-video R@1: 30.7 video-to-text R@1: 31.4 |
| zero-shot-video-retrieval-on-didemo | InternVideo | text-to-video R@1: 31.5 text-to-video R@5: 57.6 text-to-video R@10: 68.2 video-to-text R@1: 33.5 video-to-text R@5: 60.3 video-to-text R@10: 71.1 |
| zero-shot-video-retrieval-on-lsmdc | InternVideo | text-to-video R@1: 17.6 text-to-video R@5: 32.4 text-to-video R@10: 40.2 video-to-text R@1: 13.2 video-to-text R@5: 27.8 video-to-text R@10: 34.9 |
| zero-shot-video-retrieval-on-msr-vtt | InternVideo | text-to-video R@1: 40.7 video-to-text R@1: 39.6 |
| zero-shot-video-retrieval-on-msvd | InternVideo | text-to-video R@1: 43.4 video-to-text R@1: 67.6 |
| zero-shot-video-retrieval-on-vatex | InternVideo | text-to-video R@1: 49.5 video-to-text R@1: 69.5 |
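For reference, the R@k numbers in the retrieval rows above are standard recall-at-k metrics: a query counts as a hit if its ground-truth match ranks within the top k retrieved items. The sketch below shows one common way to compute them from a text-video similarity matrix; the variable names are illustrative and not tied to InternVideo's evaluation code.

```python
import torch

def recall_at_k(sim, k):
    # sim[i, j]: similarity between text query i and video j;
    # ground-truth pairs are assumed to lie on the diagonal
    ranks = sim.argsort(dim=1, descending=True)          # videos sorted per query
    gt = torch.arange(sim.size(0)).unsqueeze(1)          # ground-truth index per query
    hits = (ranks[:, :k] == gt).any(dim=1)               # hit if truth is in top k
    return 100.0 * hits.float().mean().item()            # percentage, as in the table

sim = torch.randn(1000, 1000)  # e.g., similarities for 1000 text-video test pairs
print({f"R@{k}": recall_at_k(sim, k) for k in (1, 5, 10)})
```

Video-to-text R@k is computed the same way on the transposed similarity matrix.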