kMaX-DeepLab: k-means Mask Transformer

Qihang Yu; Huiyu Wang; Siyuan Qiao; Maxwell Collins; Yukun Zhu; Hartwig Adam; Alan Yuille; Liang-Chieh Chen

Abstract

The rise of transformers in vision tasks not only advances network backbone designs, but also opens a new chapter for end-to-end image recognition (e.g., object detection and panoptic segmentation). Originating in Natural Language Processing (NLP), transformer architectures, consisting of self-attention and cross-attention, effectively learn long-range interactions between elements in a sequence. However, we observe that most existing transformer-based vision models simply borrow the idea from NLP, neglecting the crucial difference between languages and images, particularly the extremely large sequence length of spatially flattened pixel features. This subsequently impedes learning in the cross-attention between pixel features and object queries. In this paper, we rethink the relationship between pixels and object queries and propose to reformulate cross-attention learning as a clustering process. Inspired by the traditional k-means clustering algorithm, we develop a k-means Mask Xformer (kMaX-DeepLab) for segmentation tasks, which not only improves the state of the art, but also enjoys a simple and elegant design. As a result, our kMaX-DeepLab achieves new state-of-the-art performance on the COCO val set with 58.0% PQ, the Cityscapes val set with 68.4% PQ, 44.0% AP, and 83.5% mIoU, and the ADE20K val set with 50.9% PQ and 55.2% mIoU, without test-time augmentation or external datasets. We hope our work can shed some light on designing transformers tailored for vision tasks. TensorFlow code and models are available at https://github.com/google-research/deeplab2. A PyTorch re-implementation is also available at https://github.com/bytedance/kmax-deeplab.
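The core reformulation described above — treating object queries as cluster centers and replacing the usual spatial softmax in cross-attention with a per-pixel hard assignment over queries, as in the k-means assignment step — can be sketched as follows. This is a minimal NumPy illustration of the idea, not the authors' implementation; the function name and shapes are assumptions for the sketch.

```python
import numpy as np

def kmeans_cross_attention(queries, pixel_feats):
    """Sketch of k-means-style cross-attention.

    queries:     (N, D) object queries, viewed as cluster centers.
    pixel_feats: (HW, D) spatially flattened pixel features.
    Returns updated cluster centers (N, D) and the per-pixel assignment (HW,).
    """
    # Affinity between every pixel and every cluster center.
    logits = pixel_feats @ queries.T                 # (HW, N)

    # k-means assignment step: each pixel is hard-assigned to its
    # closest center (argmax over queries), instead of softmax over
    # the very long spatial axis as in standard cross-attention.
    assign = np.argmax(logits, axis=1)               # (HW,)
    attention = np.zeros_like(logits)
    attention[np.arange(logits.shape[0]), assign] = 1.0  # one-hot rows

    # k-means update step: each center aggregates the features of the
    # pixels assigned to it (here an unnormalized sum for simplicity).
    updated_queries = attention.T @ pixel_feats      # (N, D)
    return updated_queries, assign
```

Because each pixel contributes to exactly one center, the attention map stays sparse no matter how large HW grows, which is the motivation the abstract gives for the clustering view.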

Code Repositories

- cy-xu/spatially_aware_ai (PyTorch)
- bytedance/kmax-deeplab (official, PyTorch)
- google-research/deeplab2 (official, TensorFlow)

Benchmarks

| Benchmark | Methodology | Metrics |
|---|---|---|
| panoptic-segmentation-on-ade20k-val | kMaX-DeepLab (ResNet50, single-scale, 1281x1281) | AP: -, PQ: 42.3, mIoU: 45.3 |
| panoptic-segmentation-on-ade20k-val | kMaX-DeepLab (ConvNeXt-L, single-scale, 1281x1281) | AP: -, PQ: 50.9, mIoU: 55.2 |
| panoptic-segmentation-on-ade20k-val | kMaX-DeepLab (ConvNeXt-L, single-scale, 641x641) | AP: -, PQ: 48.7, mIoU: 54.8 |
| panoptic-segmentation-on-ade20k-val | kMaX-DeepLab (ResNet50, single-scale, 641x641) | AP: -, PQ: 41.5, mIoU: 45.0 |
| panoptic-segmentation-on-cityscapes-test | kMaX-DeepLab (single-scale) | PQ: 66.2 |
| panoptic-segmentation-on-cityscapes-val | kMaX-DeepLab (single-scale) | AP: 44.0, PQ: 68.4, mIoU: 83.5 |
| panoptic-segmentation-on-coco-minival | kMaX-DeepLab (single-scale, drop query with 256 queries) | PQ: 58.0, PQst: 48.6, PQth: 64.2 |
| panoptic-segmentation-on-coco-minival | kMaX-DeepLab (single-scale, pseudo-labels) | PQ: 58.1, PQst: 48.8, PQth: 64.3 |
| panoptic-segmentation-on-coco-minival | kMaX-DeepLab (single-scale) | PQ: 57.9, PQst: 48.6, PQth: 64.0 |
| panoptic-segmentation-on-coco-test-dev | kMaX-DeepLab (single-scale) | PQ: 58.5, PQst: 49.0, PQth: 64.8 |
| semantic-segmentation-on-cityscapes | kMaX-DeepLab (ConvNeXt-L, fine only) | Mean IoU (class): 83.2% |
