
Abstract
Pre-training video transformers on extra large-scale datasets is generally required to achieve premier performance on relatively small datasets. In this paper, we show that video masked autoencoders (VideoMAE) are data-efficient learners for self-supervised video pre-training (SSVP). Inspired by the recent ImageMAE, we propose customized video tube masking with an extremely high masking ratio. This simple design makes video reconstruction a more challenging self-supervision task, thus encouraging the extraction of more effective video representations during pre-training. We obtain three important findings on SSVP: (1) An extremely high masking ratio (i.e., 90% to 95%) still yields favorable performance for VideoMAE; the temporally redundant content of video enables a higher masking ratio than for images. (2) VideoMAE achieves impressive results on very small datasets (i.e., around 3k-4k videos) without using any extra data. (3) VideoMAE shows that data quality is more important than data quantity for SSVP, and that domain shift between the pre-training and target datasets is an important issue. Notably, our VideoMAE with the vanilla ViT can achieve 87.4% on Kinetics-400, 75.4% on Something-Something V2, 91.3% on UCF101, and 62.6% on HMDB51, without using any extra data. Code is available at https://github.com/MCG-NJU/VideoMAE.
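The tube masking described in the abstract is straightforward to sketch: one spatial mask is drawn at random and repeated across all temporal positions, so the same patches are hidden in every frame. The following PyTorch snippet is a minimal illustration under our own assumptions (the function name `tube_masking`, the 14x14 patch grid, and the 8 temporal segments are illustrative choices, not the authors' implementation):

```python
import torch

def tube_masking(num_frames: int, num_patches_per_frame: int,
                 mask_ratio: float = 0.9) -> torch.Tensor:
    """Generate a tube mask: the same spatial patches are masked in every
    frame, so masked content cannot be trivially recovered from temporally
    redundant neighboring frames.

    Returns a boolean mask of shape (num_frames, num_patches_per_frame),
    where True marks a masked patch.
    """
    num_masked = int(mask_ratio * num_patches_per_frame)
    # Randomly pick the spatial positions to mask once ...
    ids = torch.rand(num_patches_per_frame).argsort()
    spatial_mask = torch.zeros(num_patches_per_frame, dtype=torch.bool)
    spatial_mask[ids[:num_masked]] = True
    # ... and repeat them along the temporal axis to form "tubes".
    return spatial_mask.unsqueeze(0).expand(num_frames, -1)

# Example: 8 temporal segments (e.g., a 16-frame clip with tubelet size 2),
# a 14x14 spatial patch grid, and the paper's 90% masking ratio.
mask = tube_masking(num_frames=8, num_patches_per_frame=14 * 14, mask_ratio=0.9)
print(mask.shape, mask.float().mean().item())  # torch.Size([8, 196]), ~0.898
```

Because the masked spatial positions are identical in every frame, the model cannot simply copy a hidden patch from an adjacent frame, which is what keeps reconstruction challenging even at a 90% to 95% masking ratio.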
Code Repositories
| Repository | Framework | Notes |
|---|---|---|
| MCG-NJU/VideoMAE-Action-Detection | PyTorch | Official; mentioned in GitHub |
| huggingface/transformers | PyTorch | Mentioned in GitHub |
| pwc-1/Paper-9/tree/main/5/videomae | MindSpore | |
| innat/VideoMAE | TensorFlow | Mentioned in GitHub |
| MCG-NJU/VideoMAE | PyTorch | Official; mentioned in GitHub |
| MS-P3/code7/tree/main/videomae | MindSpore | |
| MindCode-4/code-1/tree/main/videomae | MindSpore | |
Benchmarks
| Benchmark | Method | Metrics |
|---|---|---|
| action-classification-on-kinetics-400 | VideoMAE (no extra data, ViT-H, 32x320x320) | Acc@1: 87.4 Acc@5: 97.6 |
| action-classification-on-kinetics-400 | VideoMAE (no extra data, ViT-H) | Acc@1: 86.6 Acc@5: 97.1 |
| action-classification-on-kinetics-400 | VideoMAE (no extra data, ViT-B, 16x4) | Acc@1: 81.5 Acc@5: 95.1 |
| action-classification-on-kinetics-400 | VideoMAE (no extra data, ViT-L, 32x320x320) | Acc@1: 86.1 Acc@5: 97.3 |
| action-classification-on-kinetics-400 | VideoMAE (no extra data, ViT-L, 16x4) | Acc@1: 85.2 Acc@5: 96.8 |
| action-recognition-in-videos-on-something | VideoMAE (no extra data, ViT-B, 16frame) | GFLOPs: 180x6 Parameters: 87 Top-1 Accuracy: 70.8 Top-5 Accuracy: 92.4 |
| action-recognition-in-videos-on-something | VideoMAE (no extra data, ViT-L, 32x2) | GFLOPs: 1436x3 Parameters: 305 Top-1 Accuracy: 75.4 Top-5 Accuracy: 95.2 |
| action-recognition-in-videos-on-something | VideoMAE (no extra data, ViT-L, 16frame) | GFLOPs: 597x6 Parameters: 305 Top-1 Accuracy: 74.3 Top-5 Accuracy: 94.6 |
| action-recognition-on-ava-v2-2 | VideoMAE (K700 pretrain, ViT-L, 16x4) | mAP: 36.1 |
| action-recognition-on-ava-v2-2 | VideoMAE (K400 pretrain, ViT-B, 16x4) | mAP: 26.7 |
| action-recognition-on-ava-v2-2 | VideoMAE (K400 pretrain+finetune, ViT-H, 16x4) | mAP: 39.5 |
| action-recognition-on-ava-v2-2 | VideoMAE (K400 pretrain, ViT-L, 16x4) | mAP: 34.3 |
| action-recognition-on-ava-v2-2 | VideoMAE (K700 pretrain+finetune, ViT-L, 16x4) | mAP: 39.3 |
| action-recognition-on-ava-v2-2 | VideoMAE (K400 pretrain+finetune, ViT-L, 16x4) | mAP: 37.8 |
| action-recognition-on-ava-v2-2 | VideoMAE (K400 pretrain+finetune, ViT-B, 16x4) | mAP: 31.8 |
| action-recognition-on-ava-v2-2 | VideoMAE (K400 pretrain, ViT-H, 16x4) | mAP: 36.5 |
| self-supervised-action-recognition-on-hmdb51 | VideoMAE | Frozen: false Pre-Training Dataset: Kinetics400 Top-1 Accuracy: 73.3 |
| self-supervised-action-recognition-on-hmdb51 | VideoMAE (no extra data) | Frozen: false Pre-Training Dataset: no extra data Top-1 Accuracy: 62.6 |
| self-supervised-action-recognition-on-ucf101 | VideoMAE (no extra data) | 3-fold Accuracy: 91.3 Frozen: false Pre-Training Dataset: no extra data |
| self-supervised-action-recognition-on-ucf101 | VideoMAE | 3-fold Accuracy: 96.1 Frozen: false Pre-Training Dataset: Kinetics400 |