
Abstract
Deep learning approaches are nowadays ubiquitously used to tackle computer vision tasks such as semantic segmentation, requiring large datasets and substantial computational power. Continual Learning for Semantic Segmentation (CSS) is an emerging trend that updates an old model by sequentially adding new classes. However, continual learning methods are usually prone to catastrophic forgetting. This issue is further aggravated in CSS where, at each step, old classes from previous iterations are collapsed into the background. In this paper, we propose Local POD, a multi-scale pooling distillation scheme that preserves long- and short-range spatial relationships at feature level. Furthermore, we design an entropy-based pseudo-labelling of the background with respect to classes predicted by the old model, to deal with background shift and avoid catastrophic forgetting of the old classes. Our approach, called PLOP, significantly outperforms state-of-the-art methods in existing CSS scenarios, as well as in newly proposed challenging benchmarks.
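The multi-scale pooling distillation idea can be illustrated with a minimal NumPy sketch. This is an assumption-laden simplification, not the paper's exact implementation: POD-style distillation compares width- and height-pooled feature slices between the old and new model, and the "local" variant repeats this over sub-regions at several scales. The function names and the scale set `(1, 2, 4)` here are illustrative choices.

```python
import numpy as np

def pod_embedding(feat):
    """POD embedding of a (C, H, W) feature map: concatenation of
    width-pooled (C, H) and height-pooled (C, W) slices."""
    w_pool = feat.sum(axis=2)  # pool over width  -> (C, H)
    h_pool = feat.sum(axis=1)  # pool over height -> (C, W)
    return np.concatenate([w_pool.ravel(), h_pool.ravel()])

def local_pod_loss(feat_old, feat_new, scales=(1, 2, 4)):
    """Sketch of Local POD: split the feature map into s x s regions at
    each scale, compute a POD embedding per region for the old and new
    model features, and accumulate the distance between normalized
    embeddings. Captures short-range relations at fine scales and
    long-range relations at coarse scales."""
    C, H, W = feat_old.shape
    total = 0.0
    for s in scales:
        h_step, w_step = H // s, W // s
        for i in range(s):
            for j in range(s):
                region_old = feat_old[:, i*h_step:(i+1)*h_step,
                                         j*w_step:(j+1)*w_step]
                region_new = feat_new[:, i*h_step:(i+1)*h_step,
                                         j*w_step:(j+1)*w_step]
                e_old = pod_embedding(region_old)
                e_new = pod_embedding(region_new)
                # normalize embeddings before comparing them
                e_old = e_old / (np.linalg.norm(e_old) + 1e-8)
                e_new = e_new / (np.linalg.norm(e_new) + 1e-8)
                total += np.linalg.norm(e_old - e_new)
    return total / sum(s * s for s in scales)
```

In the actual method this loss would be applied to intermediate feature maps of the frozen old model and the current model at each continual-learning step; the sketch only shows the pooling-and-compare structure for a single pair of maps.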
Code Repositories

- mostafaelaraby/bacs-continual-semantic-segmentation (PyTorch, mentioned in GitHub)
- arthurdouillard/CVPR2021_PLOP (official, PyTorch, mentioned in GitHub)
Benchmarks
| Benchmark | Method | Metric |
|---|---|---|
| disjoint-10-1-on-pascal-voc-2012 | PLOP | mIoU: 8.4 |
| disjoint-15-1-on-pascal-voc-2012 | PLOP | mIoU: 46.5 |
| disjoint-15-5-on-pascal-voc-2012 | PLOP | mIoU: 64.3 |
| overlapped-10-1-on-pascal-voc-2012 | PLOP | mIoU: 30.45 |
| overlapped-100-10-on-ade20k | PLOP | Mean IoU (test): 31.59 |
| overlapped-100-10-on-ade20k | MiB | Mean IoU (test): 29.24 |
| overlapped-100-5-on-ade20k | MiB | mIoU: 25.96 |
| overlapped-100-5-on-ade20k | PLOP | mIoU: 28.75 |
| overlapped-100-50-on-ade20k | PLOP | mIoU: 32.94 |
| overlapped-100-50-on-ade20k | MiB | mIoU: 32.79 |
| overlapped-15-1-on-pascal-voc-2012 | MiB | mIoU: 29.29 |
| overlapped-15-1-on-pascal-voc-2012 | PLOP | mIoU: 54.64 |
| overlapped-15-5-on-pascal-voc-2012 | MiB | Mean IoU (val): 70.08 |
| overlapped-15-5-on-pascal-voc-2012 | PLOP | Mean IoU (val): 70.09 |
| overlapped-50-50-on-ade20k | MiB | mIoU: 29.31 |
| overlapped-50-50-on-ade20k | PLOP | mIoU: 30.4 |