Segmentation Transformer: Object-Contextual Representations for Semantic Segmentation
Yuhui Yuan, Xiaokang Chen, Xilin Chen, Jingdong Wang

Abstract
In this paper, we address the semantic segmentation problem with a focus on the context aggregation strategy. Our motivation is that the label of a pixel is the category of the object that the pixel belongs to. We present a simple yet effective approach, object-contextual representations, characterizing a pixel by exploiting the representation of the corresponding object class. First, we learn object regions under the supervision of ground-truth segmentation. Second, we compute the object region representation by aggregating the representations of the pixels lying in the object region. Last, we compute the relation between each pixel and each object region, and augment the representation of each pixel with the object-contextual representation, which is a weighted aggregation of all the object region representations according to their relations with the pixel. We empirically demonstrate that the proposed approach achieves competitive performance on various challenging semantic segmentation benchmarks: Cityscapes, ADE20K, LIP, PASCAL-Context, and COCO-Stuff. Our submission "HRNet + OCR + SegFix" achieved first place on the Cityscapes leaderboard at the time of submission. Code is available at: https://git.io/openseg and https://git.io/HRNet.OCR. We rephrase the object-contextual representation scheme using the Transformer encoder-decoder framework; the details are presented in Section 3.3.
Benchmarks
| Benchmark | Methodology | Metrics |
|---|---|---|
| semantic-segmentation-on-ade20k | OCR (ResNet-101) | Validation mIoU: 45.28 |
| semantic-segmentation-on-ade20k | HRNetV2 + OCR + RMI (PaddleClas pretrained) | Validation mIoU: 47.98 |
| semantic-segmentation-on-ade20k | OCR (HRNetV2-W48) | Validation mIoU: 45.66 |
| semantic-segmentation-on-ade20k-val | OCR (ResNet-101) | mIoU: 45.28 |
| semantic-segmentation-on-ade20k-val | OCR (HRNetV2-W48) | mIoU: 45.66 |
| semantic-segmentation-on-ade20k-val | HRNetV2 + OCR + RMI (PaddleClas pretrained) | mIoU: 47.98 |
| semantic-segmentation-on-bdd100k-val | OCRNet | mIoU: 60.1 |
| semantic-segmentation-on-cityscapes | HRNetV2 + OCR (w/ ASPP) | Mean IoU (class): 83.7% |
| semantic-segmentation-on-cityscapes | OCR (ResNet-101, coarse) | Mean IoU (class): 82.4% |
| semantic-segmentation-on-cityscapes | OCR (HRNetV2-W48, coarse) | Mean IoU (class): 83.0% |
| semantic-segmentation-on-cityscapes | OCR (ResNet-101) | Mean IoU (class): 81.8% |
| semantic-segmentation-on-cityscapes | HRNetV2 + OCR + SegFix | Mean IoU (class): 84.5% |
| semantic-segmentation-on-cityscapes-val | HRNetV2 + OCR + RMI (PaddleClas pretrained) | mIoU: 83.6 |
| semantic-segmentation-on-cityscapes-val | OCR (ResNet-101-FCN) | mIoU: 80.6 |
| semantic-segmentation-on-coco-stuff-test | OCR (ResNet-101) | mIoU: 39.5% |
| semantic-segmentation-on-coco-stuff-test | OCR (HRNetV2-W48) | mIoU: 40.5% |
| semantic-segmentation-on-coco-stuff-test | HRNetV2 + OCR + RMI (PaddleClas pretrained) | mIoU: 45.2% |
| semantic-segmentation-on-lip-val | HRNetV2 + OCR + RMI (PaddleClas pretrained) | mIoU: 58.2% |
| semantic-segmentation-on-lip-val | OCR (ResNet-101) | mIoU: 55.6% |
| semantic-segmentation-on-lip-val | OCR (HRNetV2-W48) | mIoU: 56.65% |
| semantic-segmentation-on-pascal-context | OCR (HRNetV2-W48) | mIoU: 56.2 |
| semantic-segmentation-on-pascal-context | HRNetV2 + OCR + RMI (PaddleClas pretrained) | mIoU: 59.6 |
| semantic-segmentation-on-pascal-context | OCR (ResNet-101) | mIoU: 54.8 |
| semantic-segmentation-on-pascal-voc-2012 | OCR (ResNet-101) | Mean IoU: 84.3% |
| semantic-segmentation-on-pascal-voc-2012 | OCR (HRNetV2-W48) | Mean IoU: 84.5% |