Suhwan Cho; Heansung Lee; Minhyeok Lee; Chaewon Park; Sungjun Jang; Minjung Kim; Sangyoun Lee

Abstract
Semi-supervised video object segmentation (VOS) aims to densely track certain designated objects in videos. One of the main challenges in this task is the existence of background distractors that appear similar to the target objects. We propose three novel strategies to suppress such distractors: 1) a spatio-temporally diversified template construction scheme to obtain generalized properties of the target objects; 2) a learnable distance-scoring function to exclude spatially distant distractors by exploiting the temporal consistency between two consecutive frames; 3) a swap-and-attach augmentation to force each object to have unique features by providing training samples containing entangled objects. On all public benchmark datasets, our model achieves performance comparable to contemporary state-of-the-art approaches while running in real time. Qualitative results also demonstrate the superiority of our approach over existing methods. We believe our approach will be widely used in future VOS research.
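The swap-and-attach augmentation is only named in the abstract, so the sketch below is a rough illustration of the general idea rather than the authors' implementation: the annotated object from one training sample is pasted onto another (and vice versa), so each augmented image contains entangled objects. It assumes PyTorch tensors, and the function and variable names are hypothetical.

```python
import torch


def swap_and_attach(img_a, mask_a, img_b, mask_b):
    """Paste B's object onto image A and A's object onto image B.

    img_*:  float tensors of shape (3, H, W)
    mask_*: bool tensors of shape (H, W) marking the annotated object
    Returns the two augmented images and the visible part of each
    original object (occluded where the attached object covers it).
    """
    m_a = mask_a.unsqueeze(0).float()  # (1, H, W), object region of A
    m_b = mask_b.unsqueeze(0).float()  # (1, H, W), object region of B

    # Attach the other sample's object pixels on top of each image.
    aug_a = img_a * (1 - m_b) + img_b * m_b
    aug_b = img_b * (1 - m_a) + img_a * m_a

    # The original object is partially occluded by the attached one.
    new_mask_a = mask_a & ~mask_b
    new_mask_b = mask_b & ~mask_a
    return aug_a, new_mask_a, aug_b, new_mask_b


if __name__ == "__main__":
    # Toy usage with random images and square object masks.
    img_a, img_b = torch.rand(3, 384, 384), torch.rand(3, 384, 384)
    mask_a = torch.zeros(384, 384, dtype=torch.bool)
    mask_a[100:200, 100:200] = True
    mask_b = torch.zeros(384, 384, dtype=torch.bool)
    mask_b[150:250, 150:250] = True
    aug_a, new_mask_a, aug_b, new_mask_b = swap_and_attach(img_a, mask_a, img_b, mask_b)
    print(aug_a.shape, new_mask_a.sum().item())
```

Because the attached object overlaps the original one, the network cannot rely on spatial separation alone and is pushed to learn discriminative, object-specific features, which is the stated purpose of this augmentation.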
Code Repositories
Benchmarks
| Benchmark | Methodology | D16 val (J / F / G) | D17 val (J / F / G) | D17 test (J / F / G) | FPS |
|---|---|---|---|---|---|
| semi-supervised-video-object-segmentation-on-20 | TBD | 87.5 / 86.2 / 86.8 | 77.6 / 82.3 / 80.0 | 66.6 / 72.2 / 69.4 | 50.1 |