
Abstract
Benefiting from masked visual modeling, self-supervised video representation learning has made significant progress. However, existing methods focus on learning representations from scratch by reconstructing low-level features such as raw pixel RGB values. In this paper, we propose Masked Video Distillation (MVD), a simple yet effective two-stage masked feature modeling framework for video representation learning: first, we pretrain an image (or video) model by recovering low-level features of masked patches, then we use the resulting features as targets for masked feature modeling. For the choice of teacher models, we observe that students taught by video teachers perform better on temporally-heavy video tasks, while image teachers transfer stronger spatial representations for spatially-heavy video tasks. Visualization analysis also indicates that different teachers produce different learned patterns in their students. Motivated by this observation, we design a spatial-temporal co-teaching method for MVD. Specifically, we distill the student model from both a video teacher and an image teacher via masked feature modeling. Extensive experiments show that, on multiple video datasets, video transformers pretrained with spatial-temporal co-teaching outperform models distilled from a single teacher. Our MVD with vanilla ViT achieves state-of-the-art performance on several challenging video downstream tasks. For example, with the ViT-Large model, our MVD achieves 86.4% and 76.7% Top-1 accuracy on Kinetics-400 and Something-Something-v2, outperforming VideoMAE by 1.2% and 2.4%, respectively. When the larger ViT-Huge model is adopted, MVD achieves state-of-the-art performance with 77.3% Top-1 accuracy on Something-Something-v2 and 41.1 mAP on AVA v2.2. Code will be available at https://github.com/ruiwang2021/mvd.
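To make the second stage concrete, below is a minimal sketch of the spatial-temporal co-teaching objective described in the abstract: the student encodes a masked video, two prediction heads regress the image teacher's and video teacher's features at the masked positions, and the two masked feature regression losses are summed. This is not the official implementation; `student`, `head_img`, `head_vid`, `image_teacher`, and `video_teacher` are hypothetical placeholder modules, and the layer-normalized targets and equal loss weights are assumptions.

```python
import torch
import torch.nn.functional as F

# Hedged sketch of MVD's spatial-temporal co-teaching loss.
# All modules below are hypothetical stand-ins for the encoders and
# shallow decoders described in the paper, not the official API.

def co_teaching_loss(student, head_img, head_vid,
                     image_teacher, video_teacher, video, mask):
    """video: (B, C, T, H, W); mask: (B, N) boolean, True = masked token.
    Assumes a fixed masking ratio so boolean indexing keeps shapes aligned."""
    z = student(video, mask)                 # encode visible tokens: (B, N_vis, D)
    pred_img = head_img(z, mask)             # predict image-teacher features: (B, N_mask, D_img)
    pred_vid = head_vid(z, mask)             # predict video-teacher features: (B, N_mask, D_vid)

    with torch.no_grad():                    # both teachers stay frozen
        B = video.shape[0]
        tgt_img = image_teacher(video)[mask].view(B, -1, pred_img.shape[-1])
        tgt_vid = video_teacher(video)[mask].view(B, -1, pred_vid.shape[-1])
        # Layer-normalized targets: a common stabilization choice
        # for feature-regression objectives (assumption here).
        tgt_img = F.layer_norm(tgt_img, tgt_img.shape[-1:])
        tgt_vid = F.layer_norm(tgt_vid, tgt_vid.shape[-1:])

    # Masked feature modeling from both teachers, equally weighted (assumption).
    return F.mse_loss(pred_img, tgt_img) + F.mse_loss(pred_vid, tgt_vid)
```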
Code Repositories
- Mind23-2/MindCode-101/tree/main/MVD (MindSpore)
- ruiwang2021/mvd (official, PyTorch)
- Mind23-2/MindCode-3/tree/main/MVD (MindSpore)
Benchmarks
In the Method column, "16x224x224" denotes 16 input frames at 224×224 resolution and "16x4" denotes 16 frames sampled with a temporal stride of 4; parameter counts are in millions; the "x6" suffix on GFLOPs counts the six test-time views (2 temporal clips × 3 spatial crops).
| Benchmark | Method | Metrics |
|---|---|---|
| action-classification-on-kinetics-400 | MVD (K400 pretrain, ViT-B, 16x224x224) | Acc@1: 83.4 Acc@5: 95.8 |
| action-classification-on-kinetics-400 | MVD (K400 pretrain, ViT-H, 16x224x224) | Acc@1: 87.2 Acc@5: 97.4 |
| action-classification-on-kinetics-400 | MVD (K400 pretrain, ViT-S, 16x224x224) | Acc@1: 81.0 Acc@5: 94.8 |
| action-classification-on-kinetics-400 | MVD (K400 pretrain, ViT-L, 16x224x224) | Acc@1: 86.4 Acc@5: 97.0 |
| action-recognition-in-videos-on-something | MVD (Kinetics400 pretrain, ViT-H, 16 frame) | GFLOPs: 1192x6 Parameters: 633 Top-1 Accuracy: 77.3 Top-5 Accuracy: 95.7 |
| action-recognition-in-videos-on-something | MVD (Kinetics400 pretrain, ViT-S, 16 frame) | GFLOPs: 57x6 Parameters: 22 Top-1 Accuracy: 70.9 Top-5 Accuracy: 92.8 |
| action-recognition-in-videos-on-something | MVD (Kinetics400 pretrain, ViT-L, 16 frame) | GFLOPs: 597x6 Parameters: 305 Top-1 Accuracy: 76.7 Top-5 Accuracy: 95.5 |
| action-recognition-in-videos-on-something | MVD (Kinetics400 pretrain, ViT-B, 16 frame) | GFLOPs: 180x6 Parameters: 87 Top-1 Accuracy: 73.7 Top-5 Accuracy: 94.0 |
| action-recognition-on-ava-v2-2 | MVD (Kinetics400 pretrain, ViT-B, 16x4) | mAP: 31.1 |
| action-recognition-on-ava-v2-2 | MVD (Kinetics400 pretrain+finetune, ViT-L, 16x4) | mAP: 38.7 |
| action-recognition-on-ava-v2-2 | MVD (Kinetics400 pretrain+finetune, ViT-B, 16x4) | mAP: 34.2 |
| action-recognition-on-ava-v2-2 | MVD (Kinetics400 pretrain, ViT-L, 16x4) | mAP: 37.7 |
| action-recognition-on-ava-v2-2 | MVD (Kinetics400 pretrain+finetune, ViT-H, 16x4) | mAP: 41.1 |
| action-recognition-on-ava-v2-2 | MVD (Kinetics400 pretrain, ViT-H, 16x4) | mAP: 40.1 |
| self-supervised-action-recognition-on-hmdb51 | MVD (ViT-B) | Frozen: false Pre-Training Dataset: Kinetics400 Top-1 Accuracy: 79.7 |
| self-supervised-action-recognition-on-ucf101 | MVD (ViT-B) | 3-fold Accuracy: 97.5 Frozen: false Pre-Training Dataset: Kinetics400 |
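The "x6" GFLOPs entries above reflect the standard multi-view evaluation protocol: logits from each clip/crop combination are averaged before taking the prediction. A minimal sketch under that assumption, where `model` and the pre-extracted `views` are placeholders rather than the MVD codebase's actual interfaces:

```python
import torch

@torch.no_grad()
def multi_view_logits(model, views):
    """views: list of 6 tensors, each (B, C, T, H, W), covering
    2 temporal clips x 3 spatial crops of the same video."""
    logits = torch.stack([model(v) for v in views])  # (6, B, num_classes)
    return logits.mean(dim=0)                        # average over the 6 views
```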