
Abstract
In this paper, we study the semantic segmentation problem with a focus on the context aggregation strategy. Our motivation is that the class label of a pixel is the category of the object that the pixel belongs to. We present a simple yet effective approach, object-contextual representations, which characterizes each pixel by exploiting the representation of the corresponding object class. Specifically, we first learn object regions under the supervision of the ground-truth segmentation; second, we compute the representation of each object region by aggregating the representations of all the pixels lying in that region; last, we compute the representation similarity between each pixel and each object region, and augment the representation of each pixel with a weighted aggregation of all the object region representations, where the weights are determined by the relations between the pixel and the object regions. We empirically demonstrate that the proposed approach achieves competitive performance on various challenging semantic segmentation benchmarks, including Cityscapes, ADE20K, LIP, PASCAL-Context, and COCO-Stuff. At the time of submission, our entry "HRNet + OCR + SegFix" ranked first on the Cityscapes leaderboard. The code is available at https://git.io/openseg and https://git.io/HRNet.OCR. In addition, we rephrase the object-contextual representation scheme within a Transformer encoder-decoder framework; details are given in Section 3.3.
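The three steps above map directly onto a small attention-style module. The following PyTorch code is a minimal sketch of that pipeline, not the official openseg implementation: the class and parameter names (`ObjectContextualRepresentations`, `soft_regions`, `key_channels`, etc.) are our own, and the paper's auxiliary loss head, extra 1x1 transforms, and scaling details are omitted.

```python
import torch
import torch.nn as nn


class ObjectContextualRepresentations(nn.Module):
    """Simplified, hypothetical sketch of the three OCR steps in the abstract."""

    def __init__(self, in_channels: int, key_channels: int, num_classes: int):
        super().__init__()
        # Step 1: predict K soft object regions (a coarse segmentation).
        # In the paper these maps are supervised by the ground-truth
        # segmentation through an auxiliary loss (not shown here).
        self.soft_regions = nn.Conv2d(in_channels, num_classes, kernel_size=1)
        # Projections used when relating pixels to object regions.
        self.pixel_proj = nn.Conv2d(in_channels, key_channels, kernel_size=1)
        self.region_proj = nn.Linear(in_channels, key_channels)

    def forward(self, feats: torch.Tensor):
        b, c, h, w = feats.shape
        regions = self.soft_regions(feats)                   # B x K x H x W
        region_weights = regions.flatten(2).softmax(dim=-1)  # normalize over pixels
        pixels = feats.flatten(2)                            # B x C x HW

        # Step 2: each object region representation is the weighted sum of
        # the representations of the pixels lying in that (soft) region.
        region_repr = torch.einsum('bkn,bcn->bkc', region_weights, pixels)

        # Step 3: relation = similarity between each pixel and each region,
        # then augment each pixel with the relation-weighted aggregation of
        # all object region representations.
        query = self.pixel_proj(feats).flatten(2)            # B x D x HW
        key = self.region_proj(region_repr)                  # B x K x D
        relation = torch.einsum('bkd,bdn->bkn', key, query).softmax(dim=1)
        context = torch.einsum('bkn,bkc->bcn', relation, region_repr)
        context = context.view(b, c, h, w)

        # Augmented pixel representation: original features concatenated
        # with their object-contextual representation.
        return torch.cat([feats, context], dim=1), regions
```

As a quick shape check under these assumptions: for `feats` of shape (2, 512, 64, 64) with `key_channels=256` and `num_classes=19`, the module returns an augmented feature map of shape (2, 1024, 64, 64) together with the soft region logits of shape (2, 19, 64, 64), which in the full method would also feed the auxiliary segmentation loss.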
Code Repositories
- PaddlePaddle/PaddleSeg (paddle)
- openseg-group/openseg.pytorch (pytorch, mentioned in GitHub)
- Burf/HRNetV2-OCR-Tensorflow2 (tf, mentioned in GitHub)
- HRNet/HRNet-Semantic-Segmentation (official, pytorch, mentioned in GitHub)
- open-mmlab/mmsegmentation (pytorch)
- rosinality/ocr-pytorch (pytorch, mentioned in GitHub)
- kingcong/OCRNet (mindspore)
Benchmarks
| Benchmark | Method | Metric |
|---|---|---|
| semantic-segmentation-on-ade20k | OCR (ResNet-101) | Validation mIoU: 45.28 |
| semantic-segmentation-on-ade20k | HRNetV2 + OCR + RMI (PaddleClas pretrained) | Validation mIoU: 47.98 |
| semantic-segmentation-on-ade20k | OCR (HRNetV2-W48) | Validation mIoU: 45.66 |
| semantic-segmentation-on-ade20k-val | OCR (ResNet-101) | mIoU: 45.28 |
| semantic-segmentation-on-ade20k-val | OCR (HRNetV2-W48) | mIoU: 45.66 |
| semantic-segmentation-on-ade20k-val | HRNetV2 + OCR + RMI (PaddleClas pretrained) | mIoU: 47.98 |
| semantic-segmentation-on-bdd100k-val | OCRNet | mIoU: 60.1 |
| semantic-segmentation-on-cityscapes | HRNetV2 + OCR (w/ ASP) | Mean IoU (class): 83.7% |
| semantic-segmentation-on-cityscapes | OCR (ResNet-101, coarse) | Mean IoU (class): 82.4% |
| semantic-segmentation-on-cityscapes | OCR (HRNetV2-W48, coarse) | Mean IoU (class): 83.0% |
| semantic-segmentation-on-cityscapes | OCR (ResNet-101) | Mean IoU (class): 81.8% |
| semantic-segmentation-on-cityscapes | HRNetV2 + OCR + SegFix | Mean IoU (class): 84.5% |
| semantic-segmentation-on-cityscapes-val | HRNetV2 + OCR + RMI (PaddleClas pretrained) | mIoU: 83.6 |
| semantic-segmentation-on-cityscapes-val | OCR (ResNet-101-FCN) | mIoU: 80.6 |
| semantic-segmentation-on-coco-stuff-test | OCR (ResNet-101) | mIoU: 39.5% |
| semantic-segmentation-on-coco-stuff-test | OCR (HRNetV2-W48) | mIoU: 40.5% |
| semantic-segmentation-on-coco-stuff-test | HRNetV2 + OCR + RMI (PaddleClas pretrained) | mIoU: 45.2% |
| semantic-segmentation-on-lip-val | HRNetV2 + OCR + RMI (PaddleClas pretrained) | mIoU: 58.2% |
| semantic-segmentation-on-lip-val | OCR (ResNet-101) | mIoU: 55.6% |
| semantic-segmentation-on-lip-val | OCR (HRNetV2-W48) | mIoU: 56.65% |
| semantic-segmentation-on-pascal-context | OCR (HRNetV2-W48) | mIoU: 56.2 |
| semantic-segmentation-on-pascal-context | HRNetV2 + OCR + RMI (PaddleClas pretrained) | mIoU: 59.6 |
| semantic-segmentation-on-pascal-context | OCR (ResNet-101) | mIoU: 54.8 |
| semantic-segmentation-on-pascal-voc-2012 | OCR (ResNet-101) | Mean IoU: 84.3% |
| semantic-segmentation-on-pascal-voc-2012 | OCR (HRNetV2-W48) | Mean IoU: 84.5% |