
Abstract
Video object segmentation aims to segment a specific object throughout a video sequence, given only an annotation of the first frame. Recent deep-learning-based approaches have proven effective by fine-tuning a general-purpose segmentation model on the annotated frame with hundreds of gradient-descent iterations. Although these methods achieve high accuracy, the fine-tuning process is inefficient and cannot meet the demands of real-world applications. We propose a novel approach that adapts the segmentation model to the appearance of a specific object with a single forward pass. Specifically, we train a second meta neural network, called the modulator, which manipulates the intermediate layers of the segmentation network using limited visual and spatial information about the target object. Experiments show that our approach is 70× faster than fine-tuning approaches while achieving comparable accuracy.
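To illustrate the modulation mechanism, below is a minimal PyTorch sketch of conditional feature modulation. It is not the authors' released implementation, and all class and variable names (`VisualModulator`, `ModulatedBlock`, etc.) are illustrative: a small meta network encodes the first-frame object crop into per-channel scale parameters, and a location prior contributes an additive spatial shift, so the segmentation features are re-weighted in a single forward pass rather than through gradient-based fine-tuning.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VisualModulator(nn.Module):
    """Meta network: encodes the annotated first-frame object crop and emits
    per-channel scale parameters for one block of the segmentation network
    (the actual model conditions several blocks at once)."""
    def __init__(self, num_channels, feat_dim=256):
        super().__init__()
        self.encoder = nn.Sequential(           # tiny stand-in image encoder
            nn.Conv2d(3, feat_dim, 3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        self.to_scale = nn.Linear(feat_dim, num_channels)

    def forward(self, object_crop):             # (B, 3, H, W) -> (B, C)
        return self.to_scale(self.encoder(object_crop))

class ModulatedBlock(nn.Module):
    """Segmentation-network block whose activations are re-weighted
    channel-wise by the modulator and shifted by a spatial location prior."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, 3, padding=1)
        self.spatial_shift = nn.Conv2d(1, out_ch, 1)   # prior -> per-pixel bias

    def forward(self, x, channel_scale, location_prior):
        feat = F.relu(self.conv(x))
        scale = channel_scale.unsqueeze(-1).unsqueeze(-1)      # (B, C, 1, 1)
        prior = F.interpolate(location_prior, size=feat.shape[-2:])
        return feat * scale + self.spatial_shift(prior)        # modulate

# One forward pass adapts the block to a new target object; no fine-tuning.
modulator = VisualModulator(num_channels=64)
block = ModulatedBlock(in_ch=64, out_ch=64)
crop = torch.randn(1, 3, 128, 128)      # first-frame object crop (toy data)
prior = torch.rand(1, 1, 64, 64)        # coarse location prior for current frame
feats = torch.randn(1, 64, 64, 64)      # intermediate features of current frame
out = block(feats, modulator(crop), prior)   # (1, 64, 64, 64)
```

Because only the modulation parameters depend on the target object, switching to a new object requires recomputing them once, which is what makes the approach markedly faster than per-object fine-tuning.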
Code Repositories
Benchmarks
| Benchmark | Method | Metrics |
|---|---|---|
| one-shot-visual-object-segmentation-on | OSMN | Jaccard (Seen): 60.0 |
| semi-supervised-video-object-segmentation-on-1 | OSMN | F-measure (Decay): 17.4, F-measure (Recall): 47.4, J&F: 41.3, Jaccard (Decay): 19.0, Jaccard (Mean): 37.7, Jaccard (Recall): 38.9 |
| video-instance-segmentation-on-youtube-vis-1 | OSMN | AP50: 28.6, AP75: 33.1, mask AP: 29.1 |
| video-object-segmentation-on-youtube-vos | OSMN | F-Measure (Seen): 60.1, F-Measure (Unseen): 44.0, Jaccard (Seen): 60.0, Jaccard (Unseen): 40.6, Overall: 51.2, Speed (FPS): 7.14 |
| visual-object-tracking-on-davis-2016 | OSMN | F-measure (Decay): 10.6, F-measure (Mean): 72.9, F-measure (Recall): 84.0, J&F: 73.45, Jaccard (Decay): 9.0, Jaccard (Mean): 74.0, Jaccard (Recall): 87.6 |
| visual-object-tracking-on-davis-2017 | OSMN | F-measure (Decay): 24.3, F-measure (Mean): 57.1, F-measure (Recall): 66.1, J&F: 54.8, Jaccard (Decay): 21.5, Jaccard (Mean): 52.5, Jaccard (Recall): 60.9 |
| visual-object-tracking-on-youtube-vos | OSMN | F-Measure (Seen): 60.1, F-Measure (Unseen): 44.0, Jaccard (Seen): 60.0, O (Average of Measures): 51.2 |