Peiyuan Zhang, Kaichen Zhang, Bo Li, Guangtao Zeng, Jingkang Yang, Yuanhan Zhang, Ziyue Wang, Haoran Tan, Chunyuan Li, Ziwei Liu

Abstract
Video sequences offer valuable temporal information, but existing large multimodal models (LMMs) fall short in understanding extremely long videos. Many works address this by reducing the number of visual tokens using visual resamplers. Alternatively, in this paper, we approach this problem from the perspective of the language model. By simply extrapolating the context length of the language backbone, we enable LMMs to comprehend orders of magnitude more visual tokens without any video training. We call this phenomenon long context transfer and carefully ablate its properties. To effectively measure LMMs' ability to generalize to long contexts in the vision modality, we develop V-NIAH (Visual Needle-In-A-Haystack), a purely synthetic long vision benchmark inspired by the language model's NIAH test. Our proposed Long Video Assistant (LongVA) can process 2000 frames or over 200K visual tokens without additional complexities. With its extended context length, LongVA achieves state-of-the-art performance on Video-MME among 7B-scale models by densely sampling more input frames. Our work is open-sourced at https://github.com/EvolvingLMMs-Lab/LongVA.
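As a rough illustration of the core idea, the sketch below shows how an extended language-model context translates into a frame budget, and how a V-NIAH-style sample can be built by inserting a "needle" frame into a long haystack of frames. The per-frame token count (144) and the context sizes are illustrative assumptions consistent with the abstract's "2000 frames or over 200K visual tokens" figure, not values taken from the released implementation.

```python
import random

# Illustrative assumptions only: each sampled frame is encoded into
# roughly 144 visual tokens, and a small budget is reserved for text.
TOKENS_PER_FRAME = 144
TEXT_BUDGET = 1_000


def max_frames(context_length: int,
               tokens_per_frame: int = TOKENS_PER_FRAME,
               text_budget: int = TEXT_BUDGET) -> int:
    """Number of frames that fit once the language backbone's context is extended."""
    return (context_length - text_budget) // tokens_per_frame


def build_vniah_haystack(haystack_frames, needle_frame, depth: float):
    """Toy V-NIAH-style sample: insert a 'needle' frame into a long video
    at a relative depth in [0, 1]; the model is then queried about it."""
    pos = int(len(haystack_frames) * depth)
    return haystack_frames[:pos] + [needle_frame] + haystack_frames[pos:]


if __name__ == "__main__":
    for ctx in (4_096, 32_768, 300_000):
        print(f"context {ctx:>7,} tokens -> ~{max_frames(ctx):,} frames")

    # Stand-in frames for the sketch; in practice these would be decoded video frames.
    haystack = [f"frame_{i}" for i in range(2_000)]
    sample = build_vniah_haystack(haystack, "needle_frame", depth=random.random())
    print(len(sample), "frames including the needle")
```

The point of the arithmetic is simply that enlarging the language context, rather than compressing visual tokens with a resampler, is what raises the frame ceiling.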
Code Repositories
https://github.com/EvolvingLMMs-Lab/LongVA
Benchmarks
| Benchmark | Model | Metrics |
|---|---|---|
| Visual Question Answering (VQA) on VLM2-Bench | LongVA-7B | Average score over 9 subtasks: 22.59; GC-mat: 14.29; GC-trk: 19.18; OC-cnt: 42.53; OC-cpr: 26.67; OC-grp: 18.50; PC-VID: 3.75; PC-cnt: 38.90; PC-cpr: 21.50; PC-grp: 18.00 |
| Zero-Shot Video Question Answering on NExT-QA | LongVA (32 frames) | Accuracy: 67.1 |