Qihang Yu, Huiyu Wang, Dahun Kim, Siyuan Qiao, Maxwell Collins, Yukun Zhu, Hartwig Adam, Alan Yuille, Liang-Chieh Chen

Abstract
We propose Clustering Mask Transformer (CMT-DeepLab), a transformer-based framework for panoptic segmentation designed around clustering. Rethinking the existing transformer architectures used in segmentation and detection, CMT-DeepLab treats the object queries as cluster centers, which take on the role of grouping pixels when applied to segmentation. The clustering is computed with an alternating procedure: pixels are first assigned to clusters by their feature affinity, and the cluster centers and pixel features are then updated. Together, these operations comprise the Clustering Mask Transformer (CMT) layer, which produces cross-attention that is denser and more consistent with the final segmentation task. CMT-DeepLab improves over prior art by a significant 4.4% PQ, achieving a new state-of-the-art of 55.7% PQ on the COCO test-dev set.
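The following is a minimal sketch of the alternating clustering step described above, not the paper's implementation: the function names, the softmax over clusters as the soft assignment, and the assignment-weighted mean used for the center update are illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def clustering_update(pixel_features, cluster_centers, temperature=1.0):
    """One clustering iteration (illustrative sketch):
    assign pixels to centers by feature affinity, then update the centers
    from the assigned pixels.

    pixel_features:  (N, D) array, N pixels with D-dim features.
    cluster_centers: (K, D) array, K cluster centers (object queries).
    Returns (new_centers, assignments).
    """
    # Pixel-to-cluster affinity via dot product.
    affinity = pixel_features @ cluster_centers.T             # (N, K)
    # Soft assignment of each pixel to the clusters (assumed softmax form).
    assignments = softmax(affinity / temperature, axis=1)     # (N, K)

    # Update each center as the assignment-weighted mean of pixel features.
    weights = assignments / (assignments.sum(axis=0, keepdims=True) + 1e-6)
    new_centers = weights.T @ pixel_features                  # (K, D)
    return new_centers, assignments
```

In the full model this alternation is unrolled across CMT layers, so the resulting cross-attention maps double as soft segmentation masks; the sketch above only illustrates the assign-then-update structure on plain arrays.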
Benchmarks
| Benchmark | Methodology | Metrics |
|---|---|---|
| panoptic-segmentation-on-cityscapes-val | CMT-DeepLab (MaX-S, single-scale, IN-1K) | PQ: 64.6, mIoU: 81.4 |
| panoptic-segmentation-on-coco-minival | CMT-DeepLab (single-scale) | PQ: 55.3, PQst: 46.6, PQth: 61.0 |
| panoptic-segmentation-on-coco-test-dev | CMT-DeepLab (single-scale) | PQ: 55.7, PQst: 46.8, PQth: 61.6 |