
Abstract

We study semi-supervised semantic segmentation, where the goal is to produce pixel-level semantic object masks given only a small number of human-labeled training examples. We focus on iterative self-training methods and examine how self-training behaves over multiple refinement stages. We find that naive iterative self-training with a fixed ratio of human-labeled to pseudo-labeled samples leads to performance degradation. To address this, we propose two new strategies, Greedy Iterative Self-Training (GIST) and Random Iterative Self-Training (RIST), which alternate between training on human-labeled data and pseudo-labeled data at each refinement stage, turning the degradation into a performance boost. We further show that GIST and RIST can be combined with existing semi-supervised learning methods to significantly improve performance.
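The core idea of GIST and RIST, alternating the data source at each refinement stage, can be sketched as below. This is a minimal illustration, not the paper's implementation: the schedule representation and the `evaluate` callback (a stand-in for validation mIoU after training a stage on a given source) are assumptions.

```python
import random

def rist_schedule(num_stages, seed=0):
    """Random Iterative Self-Training (sketch): at each refinement stage,
    randomly pick whether to train on human-labeled or pseudo-labeled data."""
    rng = random.Random(seed)
    return [rng.choice(["human", "pseudo"]) for _ in range(num_stages)]

def gist_schedule(num_stages, evaluate):
    """Greedy Iterative Self-Training (sketch): at each stage, try both data
    sources and keep the one that scores higher under `evaluate`.

    `evaluate(stage, source)` is a hypothetical callback returning, e.g.,
    validation mIoU after training stage `stage` on `source`."""
    schedule = []
    for stage in range(num_stages):
        best = max(["human", "pseudo"], key=lambda src: evaluate(stage, src))
        schedule.append(best)
    return schedule

# Toy usage: a stand-in evaluator that always prefers pseudo-labeled data.
print(gist_schedule(3, lambda stage, src: 1.0 if src == "pseudo" else 0.0))
print(rist_schedule(5))
```

In the actual method, each stage would retrain the segmentation network (here, DeepLabv2) on the chosen source before the next stage's pseudo-labels are generated; the greedy variant pays for its choice with extra training runs per stage.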
Benchmarks
| Benchmark | Method | Metric |
|---|---|---|
| semi-supervised-semantic-segmentation-on-1 | GIST and RIST (DeepLabv2 with ResNet101, MSCOCO pre-trained) | Validation mIoU: 65.14% |
| semi-supervised-semantic-segmentation-on-18 | GIST and RIST (DeepLabv2 with ResNet101, MSCOCO pre-trained) | Validation mIoU: 53.51% |
| semi-supervised-semantic-segmentation-on-19 | GIST and RIST (DeepLabv2 with ResNet101, MSCOCO pre-trained) | Validation mIoU: 59.98% |
| semi-supervised-semantic-segmentation-on-2 | GIST and RIST (DeepLabv2 with ResNet101, MSCOCO pre-trained) | Validation mIoU: 62.57% |
| semi-supervised-semantic-segmentation-on-3 | GIST and RIST (DeepLabv2 with ResNet101, MSCOCO pre-trained) | Validation mIoU: 58.70% |
| semi-supervised-semantic-segmentation-on-4 | GIST and RIST | Validation mIoU: 70.76% |
| semi-supervised-semantic-segmentation-on-5 | GIST and RIST (DeepLabv2 with ResNet101, MSCOCO pre-trained) | Validation mIoU: 69.40% |
| semi-supervised-semantic-segmentation-on-6 | GIST and RIST (DeepLabv2 with ResNet101, MSCOCO pre-trained) | Validation mIoU: 67.21% |