Video Question Answering on NExT-QA

Evaluation Metric

Accuracy
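NExT-QA is a multiple-choice benchmark, so Accuracy here is simply the fraction of questions for which a model selects the correct answer option. A minimal sketch of the computation (the `predictions` and `ground_truth` data below are hypothetical examples, not taken from any model on this leaderboard):

```python
def accuracy(predictions, ground_truth):
    """Fraction of questions whose predicted option matches the label."""
    correct = sum(p == g for p, g in zip(predictions, ground_truth))
    return correct / len(ground_truth)

# Each entry is the chosen option index (NExT-QA questions have 5 choices, 0-4).
predictions = [2, 0, 4, 1, 3]
ground_truth = [2, 0, 3, 1, 3]
print(f"{accuracy(predictions, ground_truth):.1%}")  # → 80.0%
```

The leaderboard scores below report this percentage over the benchmark's test questions.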

Evaluation Results

Performance of each model on this benchmark:

| Model | Accuracy | Paper Title |
| --- | --- | --- |
| LinVT-Qwen2-VL (7B) | 85.5 | LinVT: Empower Your Image-level Large Language Model to Understand Videos |
| InternVL-2.5 (8B) | 85.5 | Expanding Performance Boundaries of Open-Source Multimodal Models with Model, Data, and Test-Time Scaling |
| VideoLLaMA3 (7B) | 84.5 | VideoLLaMA 3: Frontier Multimodal Foundation Models for Image and Video Understanding |
| BIMBA-LLaVA-Qwen2-7B | 83.73 | BIMBA: Selective-Scan Compression for Long-Range Video Question Answering |
| LLaVA-Video | 83.2 | Video Instruction Tuning With Synthetic Data |
| NVILA (8B) | 82.2 | NVILA: Efficient Frontier Visual Language Models |
| Oryx-1.5 (7B) | 81.8 | Oryx MLLM: On-Demand Spatial-Temporal Understanding at Arbitrary Resolution |
| Qwen2-VL (7B) | 81.2 | Qwen2-VL: Enhancing Vision-Language Model's Perception of the World at Any Resolution |
| LongVILA (7B) | 80.7 | LongVILA: Scaling Long-Context Visual Language Models for Long Videos |
| LLaVA-OV (72B) | 80.2 | LLaVA-OneVision: Easy Visual Task Transfer |
| VideoChat2_HD_mistral | 79.5 | MVBench: A Comprehensive Multi-modal Video Understanding Benchmark |
| LLaVA-OV (7B) | 79.4 | LLaVA-OneVision: Easy Visual Task Transfer |
| LLaVA-NeXT-Interleave (14B) | 79.1 | LLaVA-NeXT-Interleave: Tackling Multi-image, Video, and 3D in Large Multimodal Models |
| VideoChat2_mistral | 78.6 | MVBench: A Comprehensive Multi-modal Video Understanding Benchmark |
| mPLUG-Owl3 (8B) | 78.6 | mPLUG-Owl3: Towards Long Image-Sequence Understanding in Multi-Modal Large Language Models |
| LLaVA-NeXT-Interleave (7B) | 78.2 | LLaVA-NeXT-Interleave: Tackling Multi-image, Video, and 3D in Large Multimodal Models |
| LLaVA-NeXT-Interleave (DPO) | 77.9 | LLaVA-NeXT-Interleave: Tackling Multi-image, Video, and 3D in Large Multimodal Models |
| Vamos | 77.3 | Vamos: Versatile Action Models for Video Understanding |
| ViLA (3B) | 75.6 | ViLA: Efficient Video-Language Alignment for Video Question Answering |
| VideoLLaMA2.1 (7B) | 75.6 | VideoLLaMA 2: Advancing Spatial-Temporal Modeling and Audio Understanding in Video-LLMs |