RTFormer: Efficient Design for Real-Time Semantic Segmentation with Transformer
Jian Wang; Chenhui Gou; Qiman Wu; Haocheng Feng; Junyu Han; Errui Ding; Jingdong Wang

Abstract
Recently, transformer-based networks have shown impressive results in semantic segmentation. Yet for real-time semantic segmentation, pure CNN-based approaches still dominate this field, due to the time-consuming computation mechanism of transformers. We propose RTFormer, an efficient dual-resolution transformer for real-time semantic segmentation, which achieves a better trade-off between performance and efficiency than CNN-based models. To achieve high inference efficiency on GPU-like devices, RTFormer leverages GPU-Friendly Attention with linear complexity and discards the multi-head mechanism. Besides, we find that cross-resolution attention is more efficient at gathering global context information for the high-resolution branch by spreading the high-level knowledge learned from the low-resolution branch. Extensive experiments on mainstream benchmarks demonstrate the effectiveness of the proposed RTFormer: it achieves state-of-the-art performance on Cityscapes, CamVid and COCOStuff, and shows promising results on ADE20K. Code is available at PaddleSeg: https://github.com/PaddlePaddle/PaddleSeg.
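The abstract's key efficiency claim is attention with linear complexity. The paper's GPU-Friendly Attention builds on the external-attention idea, where tokens attend to a small set of learnable memory vectors instead of to each other, so cost scales linearly with the number of tokens. The sketch below illustrates that general idea only; the shapes, the double-normalization step, and all names are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def linear_attention(x, keys, values):
    """External-attention-style sketch: x is (N, d) tokens; keys and
    values are (M, d) learnable memories with M << N, so the cost is
    O(N * M * d), linear in the number of tokens N."""
    logits = x @ keys.T                                  # (N, M)
    # softmax over the memory axis (numerically stabilized)
    logits -= logits.max(axis=1, keepdims=True)
    attn = np.exp(logits)
    attn /= attn.sum(axis=1, keepdims=True)
    # double normalization over the token axis, as in external
    # attention, to keep memory-slot usage balanced (an assumption here)
    attn /= attn.sum(axis=0, keepdims=True) + 1e-6
    return attn @ values                                 # (N, d)

rng = np.random.default_rng(0)
N, M, d = 1024, 64, 32          # many tokens, few memory slots
x = rng.standard_normal((N, d))
K = rng.standard_normal((M, d))
V = rng.standard_normal((M, d))
out = linear_attention(x, K, V)
print(out.shape)                 # (1024, 32)
```

Note that because the memory size M is fixed, doubling the input resolution only doubles the attention cost, which is what makes this family of mechanisms attractive for real-time segmentation.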
Benchmarks
| Benchmark | Model | Frame rate (fps) | mIoU (%) |
|---|---|---|---|
| real-time-semantic-segmentation-on-camvid | RTFormer-Slim | 190.7 (2080Ti) | 81.4 |
| real-time-semantic-segmentation-on-cityscapes-1 | RTFormer-S | 89.6 | 76.3 |
| real-time-semantic-segmentation-on-cityscapes-1 | RTFormer-B | 50.2 | 79.3 |
| semantic-segmentation-on-camvid | RTFormer-Base | | 82.5 |