OneFormer: One Transformer to Rule Universal Image Segmentation

Jitesh Jain; Jiachen Li; MangTik Chiu; Ali Hassani; Nikita Orlov; Humphrey Shi

Abstract

Universal image segmentation is not a new concept. Past attempts to unify image segmentation over the last decades include scene parsing, panoptic segmentation, and, more recently, new panoptic architectures. However, such panoptic architectures do not truly unify image segmentation because they need to be trained individually on semantic, instance, or panoptic segmentation to achieve the best performance. Ideally, a truly universal framework should be trained only once and achieve SOTA performance across all three image segmentation tasks. To that end, we propose OneFormer, a universal image segmentation framework that unifies segmentation with a multi-task train-once design. We first propose a task-conditioned joint training strategy that enables training on ground truths of each domain (semantic, instance, and panoptic segmentation) within a single multi-task training process. Second, we introduce a task token to condition the model on the task at hand, making it task-dynamic to support multi-task training and inference. Third, we propose using a query-text contrastive loss during training to establish better inter-task and inter-class distinctions. Notably, our single OneFormer model outperforms specialized Mask2Former models across all three segmentation tasks on ADE20K, Cityscapes, and COCO, despite the latter being trained on each of the three tasks individually with three times the resources. With new ConvNeXt and DiNAT backbones, we observe further performance improvements. We believe OneFormer is a significant step towards making image segmentation more universal and accessible. To support further research, we open-source our code and models at https://github.com/SHI-Labs/OneFormer
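
The query-text contrastive loss mentioned in the abstract pairs each object query with a text embedding during training. As a rough illustration only, the sketch below shows a generic symmetric (InfoNCE-style) contrastive loss between query and text embeddings; the tensor shapes, temperature value, and function name are illustrative assumptions, not the paper's exact implementation.

```python
# Minimal sketch of a query-text contrastive loss (InfoNCE-style, symmetric).
# Shapes, temperature, and pairing are illustrative assumptions.
import torch
import torch.nn.functional as F

def query_text_contrastive_loss(query_emb, text_emb, temperature=0.07):
    """query_emb, text_emb: (N, D) tensors; row i of each forms a positive pair."""
    # L2-normalize both sets of embeddings.
    q = F.normalize(query_emb, dim=-1)
    t = F.normalize(text_emb, dim=-1)
    # Pairwise cosine similarities scaled by temperature.
    logits = q @ t.T / temperature                      # (N, N)
    targets = torch.arange(q.size(0), device=q.device)  # matching index = positive
    # Symmetric cross-entropy: query-to-text and text-to-query.
    loss_q2t = F.cross_entropy(logits, targets)
    loss_t2q = F.cross_entropy(logits.T, targets)
    return 0.5 * (loss_q2t + loss_t2q)

# Example: 16 object queries matched against 16 text embeddings of dimension 256.
loss = query_text_contrastive_loss(torch.randn(16, 256), torch.randn(16, 256))
```

In OneFormer's setting, the text side is derived from the ground truth of the sampled task, which is what encourages the inter-task and inter-class distinctions described above.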

Code Repositories

https://github.com/SHI-Labs/OneFormer

Benchmarks

Benchmark | Methodology | Metrics
instance-segmentation-on-ade20k-val | OneFormer (DiNAT-L, single-scale) | AP: 36.0
instance-segmentation-on-ade20k-val | OneFormer (Swin-L, single-scale) | AP: 35.9
instance-segmentation-on-ade20k-val | OneFormer (DiNAT-L, single-scale, 1280x1280, COCO-pretrain) | AP: 40.2, APL: 59.7, APM: 44.4, APS: 19.2
instance-segmentation-on-ade20k-val | OneFormer (InternImage-H, emb_dim=1024, single-scale, 896x896, COCO-Pretrained) | AP: 44.2, APL: 64.3, APM: 49.9, APS: 23.7
instance-segmentation-on-cityscapes-val | OneFormer (ConvNeXt-L, single-scale, Mapillary-Pretrained) | mask AP: 48.7
instance-segmentation-on-cityscapes-val | OneFormer (Swin-L, single-scale) | mask AP: 45.6
instance-segmentation-on-cityscapes-val | OneFormer (DiNAT-L, single-scale) | mask AP: 45.6
instance-segmentation-on-coco-val-panoptic | OneFormer (Swin-L, single-scale) | AP: 49.0
instance-segmentation-on-coco-val-panoptic | OneFormer (InternImage-H, emb_dim=1024, single-scale) | AP: 52.0
instance-segmentation-on-coco-val-panoptic | OneFormer (DiNAT-L, single-scale) | AP: 49.2
panoptic-segmentation-on-ade20k-val | OneFormer (ConvNeXt-L, single-scale, 640x640) | AP: 36.2, PQ: 50.0, mIoU: 56.6
panoptic-segmentation-on-ade20k-val | OneFormer (DiNAT-L, single-scale, 640x640) | AP: 36.0, PQ: 50.5, mIoU: 58.3
panoptic-segmentation-on-ade20k-val | OneFormer (InternImage-H, emb_dim=256, single-scale, 896x896) | AP: 40.2, PQ: 54.5, mIoU: 60.4
panoptic-segmentation-on-ade20k-val | OneFormer (DiNAT-L, single-scale, 1280x1280, COCO-Pretrain) | PQ: 53.4, mIoU: 58.9
panoptic-segmentation-on-ade20k-val | OneFormer (ConvNeXt-XL, single-scale, 640x640) | AP: 36.3, PQ: 50.1, mIoU: 57.4
panoptic-segmentation-on-ade20k-val | OneFormer (DiNAT-L, single-scale, 1280x1280) | AP: 37.1, PQ: 51.5, mIoU: 58.3
panoptic-segmentation-on-ade20k-val | OneFormer (Swin-L, single-scale, 1280x1280) | AP: 37.8, PQ: 51.4, mIoU: 57.0
panoptic-segmentation-on-ade20k-val | OneFormer (Swin-L, single-scale, 640x640) | AP: 35.9, PQ: 49.8, mIoU: 57.0
panoptic-segmentation-on-cityscapes-test | OneFormer (ConvNeXt-L, single-scale, Mapillary Vistas-Pretrained) | PQ: 68.0
panoptic-segmentation-on-cityscapes-val | OneFormer (ConvNeXt-XL, single-scale) | AP: 46.7, PQ: 68.4, mIoU: 83.6
panoptic-segmentation-on-cityscapes-val | OneFormer (Swin-L, single-scale) | AP: 45.6, PQ: 67.2, mIoU: 83.0
panoptic-segmentation-on-cityscapes-val | OneFormer (DiNAT-L, single-scale) | AP: 45.6, PQ: 67.6, mIoU: 83.1
panoptic-segmentation-on-cityscapes-val | OneFormer (ConvNeXt-L, single-scale, 512x1024, Mapillary Vistas-pretrained) | AP: 48.7, PQ: 70.1, PQst: 74.1, PQth: 64.6, mIoU: 84.6
panoptic-segmentation-on-cityscapes-val | OneFormer (ConvNeXt-L, single-scale) | AP: 46.5, PQ: 68.51, mIoU: 83.0
panoptic-segmentation-on-coco-minival | OneFormer (InternImage-H, single-scale) | AP: 52.0, PQ: 60.0, PQst: 49.2, PQth: 67.1, mIoU: 68.8
panoptic-segmentation-on-coco-minival | OneFormer (Swin-L, single-scale) | AP: 49.0, PQ: 57.9, PQst: 48.0, PQth: 64.4, mIoU: 67.4
panoptic-segmentation-on-coco-minival | OneFormer (DiNAT-L, single-scale) | AP: 49.2, PQ: 58.0, PQst: 48.4, PQth: 64.3, mIoU: 68.1
panoptic-segmentation-on-mapillary-val | OneFormer (DiNAT-L, single-scale) | PQ: 46.7, PQst: 54.9, PQth: 40.5, mIoU: 61.7
panoptic-segmentation-on-mapillary-val | OneFormer (ConvNeXt-L, single-scale) | PQ: 46.4, PQst: 54.0, PQth: 40.6, mIoU: 61.6
semantic-segmentation-on-ade20k-val | OneFormer (InternImage-H, emb_dim=256, multi-scale, 896x896) | mIoU: 60.8
semantic-segmentation-on-ade20k-val | OneFormer (Swin-L, multi-scale, 640x640) | mIoU: 57.7
semantic-segmentation-on-ade20k-val | OneFormer (DiNAT-L, multi-scale, 896x896) | mIoU: 58.6
semantic-segmentation-on-ade20k-val | OneFormer (Swin-L, multi-scale, 896x896) | mIoU: 58.3
semantic-segmentation-on-ade20k-val | OneFormer (DiNAT-L, multi-scale, 640x640) | mIoU: 58.4
semantic-segmentation-on-cityscapes-val | OneFormer (ConvNeXt-XL, multi-scale) | mIoU: 84.6
semantic-segmentation-on-cityscapes-val | OneFormer (Swin-L, multi-scale) | mIoU: 84.4
semantic-segmentation-on-cityscapes-val | OneFormer (ConvNeXt-XL, Mapillary, multi-scale) | mIoU: 85.8
semantic-segmentation-on-coco-1 | OneFormer (InternImage-H, emb_dim=1024, single-scale) | mIoU: 68.8
semantic-segmentation-on-coco-1 | OneFormer (Swin-L, single-scale) | mIoU: 67.4
semantic-segmentation-on-coco-1 | OneFormer (DiNAT-L, single-scale) | mIoU: 68.1
semantic-segmentation-on-mapillary-val | OneFormer (DiNAT-L, multi-scale) | mIoU: 64.9
