LLaVA-MR: Large Language-and-Vision Assistant for Video Moment Retrieval
Weiheng Lu, Jian Li, An Yu, Ming-Ching Chang, Shengpeng Ji, Min Xia

Abstract
Multimodal Large Language Models (MLLMs) are widely used for visual perception, understanding, and reasoning. However, long video processing and precise moment retrieval remain challenging due to LLMs' limited context size and coarse frame extraction. We propose the Large Language-and-Vision Assistant for Moment Retrieval (LLaVA-MR), which enables accurate moment retrieval and contextual grounding in videos using MLLMs. LLaVA-MR combines Dense Frame and Time Encoding (DFTE) for spatial-temporal feature extraction, Informative Frame Selection (IFS) for capturing brief visual and motion patterns, and Dynamic Token Compression (DTC) to manage LLM context limitations. Evaluations on benchmarks like Charades-STA and QVHighlights demonstrate that LLaVA-MR outperforms 11 state-of-the-art methods, achieving an improvement of 1.82% in R1@0.5 and 1.29% in mAP@0.5 on the QVHighlights dataset. Our implementation will be open-sourced upon acceptance.
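The abstract names the three components but gives no implementation detail. The following minimal NumPy sketch shows one plausible reading of that pipeline; every function name, the sinusoidal time encoding, the frame-difference scoring for IFS, and the chunked average pooling for DTC are illustrative assumptions, not the paper's actual design.

```python
# Hypothetical sketch of an LLaVA-MR-style pipeline as described in the
# abstract. The scoring and compression strategies are illustrative
# assumptions, not the paper's implementation.
import numpy as np

def dense_frame_time_encoding(frames: np.ndarray, dim: int = 64) -> np.ndarray:
    """DFTE (assumed): append a sinusoidal time encoding to per-frame features."""
    t = np.arange(len(frames))[:, None]                    # frame indices
    freqs = 1.0 / (10000 ** (np.arange(0, dim, 2) / dim))  # standard sinusoid
    time_enc = np.concatenate([np.sin(t * freqs), np.cos(t * freqs)], axis=1)
    return np.concatenate([frames, time_enc], axis=1)

def informative_frame_selection(feats: np.ndarray, keep: int):
    """IFS (assumed): keep frames whose features change most vs. the previous
    frame, a cheap proxy for brief visual and motion events."""
    diff = np.linalg.norm(np.diff(feats, axis=0), axis=1)
    diff = np.concatenate([[diff.max()], diff])  # frame 0 retained by construction
    idx = np.sort(np.argsort(diff)[-keep:])      # top-k scores, in temporal order
    return idx, feats[idx]

def dynamic_token_compression(feats: np.ndarray, budget: int) -> np.ndarray:
    """DTC (assumed): average-pool contiguous chunks down to an LLM-sized budget."""
    if len(feats) <= budget:
        return feats
    chunks = np.array_split(feats, budget)       # contiguous temporal chunks
    return np.stack([c.mean(axis=0) for c in chunks])

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    frames = rng.normal(size=(300, 512))         # 300 dense frame features
    feats = dense_frame_time_encoding(frames)
    idx, kept = informative_frame_selection(feats, keep=64)
    tokens = dynamic_token_compression(kept, budget=32)
    print(tokens.shape)                          # (32, 576): fits the token budget
```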
Benchmarks
| Benchmark | Methodology | Metrics |
|---|---|---|
| moment-retrieval-on-charades-sta | LLaVA-MR | R@1 IoU=0.5: 70.65; R@1 IoU=0.7: 49.58 |
| moment-retrieval-on-qvhighlights | LLaVA-MR | R@1 IoU=0.5: 76.59; R@1 IoU=0.7: 61.48; mAP: 52.73; mAP@0.5: 69.41; mAP@0.75: 54.40 |
| natural-language-moment-retrieval-on | LLaVA-MR | R@1 IoU=0.5: 55.16; R@1 IoU=0.7: 35.68 |
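For reference, the R@1 IoU=θ numbers above count a query as answered when the model's top-ranked moment overlaps the ground-truth span with temporal IoU of at least θ. A minimal computation of that metric, on made-up spans rather than the paper's outputs, looks like this:

```python
# R@1 at an IoU threshold: the top-ranked predicted moment counts as a hit
# if its temporal IoU with the ground-truth span reaches the threshold.
# The spans below are toy examples, not results from the paper.
def temporal_iou(pred: tuple[float, float], gt: tuple[float, float]) -> float:
    inter = max(0.0, min(pred[1], gt[1]) - max(pred[0], gt[0]))
    union = (pred[1] - pred[0]) + (gt[1] - gt[0]) - inter
    return inter / union if union > 0 else 0.0

def recall_at_1(top1_preds, gts, thresh=0.5):
    hits = sum(temporal_iou(p, g) >= thresh for p, g in zip(top1_preds, gts))
    return hits / len(gts)

print(recall_at_1([(2.0, 9.5), (0.0, 4.0)], [(3.0, 10.0), (6.0, 8.0)]))  # 0.5
```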