Miran Heo, Sukjun Hwang, Seoung Wug Oh, Joon-Young Lee, Seon Joo Kim

Abstract
We introduce a novel paradigm for offline Video Instance Segmentation (VIS), based on the hypothesis that explicit object-oriented information can be a strong clue for understanding the context of the entire sequence. To this end, we propose VITA, a simple structure built on top of an off-the-shelf Transformer-based image instance segmentation model. Specifically, we use an image object detector as a means of distilling object-specific contexts into object tokens. VITA accomplishes video-level understanding by associating frame-level object tokens without using spatio-temporal backbone features. By effectively building relationships between objects using this condensed information, VITA achieves state-of-the-art results on VIS benchmarks with a ResNet-50 backbone: 49.8 AP and 45.7 AP on YouTube-VIS 2019 and 2021, respectively, and 19.6 AP on OVIS. Moreover, thanks to its object token-based structure that is disjoint from the backbone features, VITA offers practical advantages that previous offline VIS methods have not explored: handling long and high-resolution videos on a common GPU, and freezing a frame-level detector trained on the image domain. Code is available at https://github.com/sukjunhwang/VITA.
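The core idea of associating frame-level object tokens can be illustrated with a single cross-attention step: a set of learnable video-level queries attends over the per-frame object tokens concatenated across time, so no spatio-temporal backbone features are ever touched. The sketch below is a minimal, hypothetical illustration in NumPy; the function name, token counts, and single-head attention are assumptions for clarity, not the paper's exact layer.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def associate_frame_tokens(frame_tokens, video_queries):
    """One cross-attention step over frame-level object tokens.

    frame_tokens : (T, N, C) array - N object tokens per frame, T frames
    video_queries: (M, C) array    - M learnable video-level queries
    Returns an (M, C) array of video-level object embeddings.
    (Illustrative single-head attention, not VITA's exact architecture.)
    """
    T, N, C = frame_tokens.shape
    # Flatten time: the queries see all T*N tokens at once,
    # so only the condensed object tokens are needed, not backbone features.
    keys = frame_tokens.reshape(T * N, C)
    attn = softmax(video_queries @ keys.T / np.sqrt(C))  # (M, T*N)
    return attn @ keys                                   # (M, C)

rng = np.random.default_rng(0)
tokens = rng.standard_normal((5, 10, 32))   # e.g. 5 frames, 10 tokens, dim 32
queries = rng.standard_normal((4, 32))      # 4 hypothetical video queries
out = associate_frame_tokens(tokens, queries)
print(out.shape)  # (4, 32)
```

Because the per-frame tokens are small compared with backbone feature maps, this kind of association scales to long and high-resolution videos, which is the practical advantage the abstract points out.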
Benchmarks
| Benchmark | Method | mask AP | AP50 | AP75 | AR1 | AR10 |
|---|---|---|---|---|---|---|
| Video Instance Segmentation on OVIS | VITA (Swin-L) | 27.7 | 51.9 | 24.9 | 14.9 | 33.0 |
| Video Instance Segmentation on YouTube-VIS 2021 | VITA (Swin-L) | 57.5 | 80.6 | 61.0 | 47.7 | 62.6 |