Action Classification On Kinetics 600

Evaluation Metric

Top-1 Accuracy
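Top-1 accuracy is the fraction of test clips whose single highest-scoring predicted class matches the ground-truth label. The sketch below illustrates the metric on toy data; it is not the benchmark's official evaluation code, and the function name and array shapes are illustrative assumptions.

```python
import numpy as np

def top1_accuracy(logits: np.ndarray, labels: np.ndarray) -> float:
    """Fraction of samples whose argmax class matches the label.

    logits: (num_samples, num_classes) scores, labels: (num_samples,) class ids.
    (Illustrative helper, not the benchmark's official scorer.)
    """
    preds = np.argmax(logits, axis=1)
    return float(np.mean(preds == labels))

# Toy example: 3 clips, 4 action classes.
logits = np.array([
    [0.1, 0.7, 0.1, 0.1],  # argmax -> class 1
    [0.5, 0.2, 0.2, 0.1],  # argmax -> class 0
    [0.2, 0.2, 0.5, 0.1],  # argmax -> class 2
])
labels = np.array([1, 0, 3])  # last clip is misclassified
print(top1_accuracy(logits, labels))  # 2 of 3 correct
```

A score of 91.9 in the table below means 91.9% of Kinetics-600 validation clips were classified correctly on the first guess.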

Results

Performance of each model on this benchmark:

| Model | Top-1 Accuracy (%) | Paper Title |
| --- | --- | --- |
| InternVideo2-6B | 91.9 | InternVideo2: Scaling Foundation Models for Multimodal Video Understanding |
| TubeVit-H | 91.8 | Rethinking Video ViTs: Sparse Video Tubes for Joint Image and Video Learning |
| InternVideo2-1B | 91.6 | InternVideo2: Scaling Foundation Models for Multimodal Video Understanding |
| TubeVit-L | 91.5 | Rethinking Video ViTs: Sparse Video Tubes for Joint Image and Video Learning |
| InternVideo-T | 91.3 | InternVideo: General Video Foundation Models via Generative and Discriminative Learning |
| Model 45 | 91.1 | MERLOT Reserve: Neural Script Knowledge through Vision and Language and Sound |
| TubeVit-B | 90.9 | Rethinking Video ViTs: Sparse Video Tubes for Joint Image and Video Learning |
| UMT-L (ViT-L/16) | 90.5 | Unmasked Teacher: Towards Training-Efficient Video Foundation Models |
| MTV-H (WTS 60M) | 90.3 | Multiview Transformers for Video Recognition |
| UniFormerV2-L | 90.1 | UniFormerV2: Spatiotemporal Learning by Arming Image ViTs with Video UniFormer |
| VideoMAE V2-g (64x266x266) | 89.9 | VideoMAE V2: Scaling Video Masked Autoencoders with Dual Masking |
| mPLUG-2 | 89.8 | mPLUG-2: A Modularized Multi-modal Foundation Model Across Text, Image and Video |
| EVA | 89.8 | EVA: Exploring the Limits of Masked Visual Representation Learning at Scale |
| Model 11 | 89.7 | MERLOT Reserve: Neural Script Knowledge through Vision and Language and Sound |
| CoCa (finetuned) | 89.4 | CoCa: Contrastive Captioners are Image-Text Foundation Models |
| Model 55 | 89.4 | MERLOT Reserve: Neural Script Knowledge through Vision and Language and Sound |
| VideoMAE V2-g | 88.8 | VideoMAE V2: Scaling Video Masked Autoencoders with Dual Masking |
| Hiera-H (no extra data) | 88.8 | Hiera: A Hierarchical Vision Transformer without the Bells-and-Whistles |
| CoCa (frozen) | 88.5 | CoCa: Contrastive Captioners are Image-Text Foundation Models |
| X-CLIP (ViT-L/14, CLIP) | 88.3 | Expanding Language-Image Pretrained Models for General Video Recognition |