Find First, Track Next: Decoupling Identification and Propagation in Referring Video Object Segmentation
Suhwan Cho, Seunghoon Lee, Minhyeok Lee, Jungho Lee, Sangyoun Lee

Abstract
Referring video object segmentation aims to segment and track a target object in a video using a natural language prompt. Existing methods typically fuse visual and textual features in a highly entangled manner, processing multi-modal information together to generate per-frame masks. However, this approach often struggles with ambiguous target identification, particularly in scenes with multiple similar objects, and fails to ensure consistent mask propagation across frames. To address these limitations, we introduce FindTrack, a novel decoupled framework that separates target identification from mask propagation. FindTrack first adaptively selects a key frame by balancing segmentation confidence and vision-text alignment, establishing a robust reference for the target object. This reference is then utilized by a dedicated propagation module to track and segment the object across the entire video. By decoupling these processes, FindTrack effectively reduces ambiguities in target association and enhances segmentation consistency. We demonstrate that FindTrack outperforms existing methods on public benchmarks.
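The two-stage design described in the abstract can be read as a simple identify-then-propagate loop. The sketch below is one plausible rendering of that pipeline, not the paper's implementation: `segment_with_text`, `text_alignment`, and `propagate_mask` are hypothetical placeholders for the segmentation, vision-text alignment, and mask-propagation modules, and the weighted-sum scoring is an assumed way to balance the two key-frame criteria.

```python
# Illustrative sketch of a decoupled identify-then-propagate pipeline.
# `segment_with_text`, `text_alignment`, and `propagate_mask` are hypothetical
# placeholders for the segmentation, vision-text alignment, and propagation
# models; the weighted sum below is an assumed scoring rule, not the paper's
# exact formulation.

def select_key_frame(frames, prompt, alpha=0.5):
    """Return the index and mask of the frame that best identifies the target."""
    best_score, key_idx, key_mask = float("-inf"), None, None
    for idx, frame in enumerate(frames):
        mask, confidence = segment_with_text(frame, prompt)  # per-frame mask + confidence
        alignment = text_alignment(frame, mask, prompt)      # vision-text alignment score
        score = alpha * confidence + (1.0 - alpha) * alignment
        if score > best_score:
            best_score, key_idx, key_mask = score, idx, mask
    return key_idx, key_mask


def findtrack(frames, prompt):
    # Stage 1 (identification): pick one reliable reference frame and mask.
    key_idx, key_mask = select_key_frame(frames, prompt)
    # Stage 2 (propagation): track that reference across the whole video,
    # without re-fusing text features on every frame.
    return propagate_mask(frames, key_idx, key_mask)
```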
Benchmarks
| Benchmark | Methodology | J | F | J&F |
|---|---|---|---|---|
| referring-video-object-segmentation-on-mevis | FindTrack | 45.6 | 50.7 | 48.2 |
| referring-video-object-segmentation-on-ref | FindTrack | 69.9 | 78.5 | 74.2 |
| referring-video-object-segmentation-on-refer | FindTrack | 68.6 | 72.0 | 70.3 |