
Abstract
This work presents ViT-Adapter, a simple yet effective adapter that equips the plain Vision Transformer (ViT) for dense prediction tasks. Unlike recent variants that build vision-specific inductive biases into their architectures, the plain ViT performs poorly on dense prediction because of its weak prior assumptions. To address this, we propose ViT-Adapter, which allows a plain ViT to reach performance comparable to Transformers designed specifically for vision. In our framework, the backbone is a plain ViT that can learn powerful representations from large-scale multi-modal data. When transferring to downstream tasks, a pre-training-free adapter injects image-related inductive biases into the model, making it suitable for a variety of dense prediction tasks. We verify the effectiveness of ViT-Adapter on multiple dense prediction tasks, including object detection, instance segmentation, and semantic segmentation. Notably, without using extra detection data, our ViT-Adapter-L achieves state-of-the-art results of 60.9 box AP and 53.0 mask AP on the COCO test-dev benchmark. We hope ViT-Adapter can serve as an alternative to vision-specific Transformers and facilitate future research. Code and models will be released at https://github.com/czczup/ViT-Adapter.
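The abstract describes the mechanism only at a high level: the plain ViT backbone is kept intact, and a pre-training-free adapter injects image-related inductive biases (e.g. local spatial structure) when transferring to dense prediction. The PyTorch sketch below illustrates that general idea under our own assumptions: a small convolutional spatial-prior branch produces image-conditioned tokens, and a zero-initialized cross-attention block fuses them into the ViT's patch tokens. Module names, shapes, and the fusion schedule are illustrative and do not reproduce the official ViT-Adapter implementation.

```python
# Minimal sketch of the adapter idea: a convolutional spatial-prior branch
# supplies image-related inductive biases that are injected into the patch
# tokens of a plain ViT via cross-attention. This is an illustrative
# simplification, not the code released at https://github.com/czczup/ViT-Adapter.
import torch
import torch.nn as nn


class SpatialPriorModule(nn.Module):
    """Convolutional stem producing spatial tokens with local inductive biases."""

    def __init__(self, embed_dim: int = 768):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(3, 64, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, kernel_size=3, stride=2, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, embed_dim, kernel_size=3, stride=4, padding=1),  # total stride 16
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feat = self.stem(x)                      # (B, C, H/16, W/16)
        return feat.flatten(2).transpose(1, 2)   # (B, N, C) spatial tokens


class InjectorBlock(nn.Module):
    """Cross-attention that injects spatial-prior tokens into ViT patch tokens."""

    def __init__(self, embed_dim: int = 768, num_heads: int = 8):
        super().__init__()
        self.norm_q = nn.LayerNorm(embed_dim)
        self.norm_kv = nn.LayerNorm(embed_dim)
        self.attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.gamma = nn.Parameter(torch.zeros(embed_dim))  # zero-init: starts as identity

    def forward(self, vit_tokens: torch.Tensor, spatial_tokens: torch.Tensor) -> torch.Tensor:
        q = self.norm_q(vit_tokens)
        kv = self.norm_kv(spatial_tokens)
        out, _ = self.attn(q, kv, kv)
        return vit_tokens + self.gamma * out     # residual injection into ViT tokens


if __name__ == "__main__":
    B, H, W, C = 2, 512, 512, 768
    image = torch.randn(B, 3, H, W)
    # Stand-in for the patch tokens of a plain ViT with 16x16 patches.
    vit_tokens = torch.randn(B, (H // 16) * (W // 16), C)

    spm = SpatialPriorModule(C)
    injector = InjectorBlock(C)
    fused = injector(vit_tokens, spm(image))
    print(fused.shape)  # torch.Size([2, 1024, 768])
```

A full adapter would additionally expose multi-scale feature maps to the detection or segmentation head; this sketch shows only a single-scale injection step into the unchanged ViT token stream.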
Code Repositories
- czczup/vit-adapter (official, PyTorch)
- chenller/mmseg-extension (PyTorch)
Benchmarks
| Benchmark | Method | Metrics |
|---|---|---|
| instance-segmentation-on-coco | ViT-Adapter-L (HTC++, BEiTv2 pretrain, multi-scale) | mask AP: 53.0 |
| instance-segmentation-on-coco | ViT-Adapter-L (HTC++, BEiT pretrain, multi-scale) | mask AP: 52.5 |
| instance-segmentation-on-coco | ViT-Adapter-L (HTC++, BEiTv2, O365, multi-scale) | mask AP: 54.5 |
| instance-segmentation-on-coco-minival | ViT-Adapter-L (HTC++, BEiT pretrain, multi-scale) | mask AP: 52.2 |
| instance-segmentation-on-coco-minival | ViT-Adapter-L (HTC++, BEiTv2, O365, multi-scale) | mask AP: 54.2 |
| instance-segmentation-on-coco-minival | ViT-Adapter-L (HTC++, BEiTv2 pretrain, multi-scale) | mask AP: 52.5 |
| object-detection-on-coco | ViT-Adapter-L (HTC++, BEiT pretrain, multi-scale) | box mAP: 60.4 |
| object-detection-on-coco | ViT-Adapter-L (HTC++, BEiTv2 pretrain, multi-scale) | box mAP: 60.9 |
| object-detection-on-coco-minival | ViT-Adapter-L (HTC++, BEiT pretrain, multi-scale) | box AP: 60.2 |
| object-detection-on-coco-minival | ViT-Adapter-L (HTC++, BEiTv2 pretrain, multi-scale) | box AP: 60.5 |
| object-detection-on-coco-o | ViT-Adapter (BEiTv2-L) | Average mAP: 34.25; Effective Robustness: 7.79 |
| panoptic-segmentation-on-coco-minival | ViT-Adapter-L (single-scale, BEiTv2 pretrain, Mask2Former) | AP: 48.9; PQ: 58.4; PQst: 48.4; PQth: 65.0 |
| semantic-segmentation-on-ade20k | ViT-Adapter-L (UperNet, BEiT pretrain) | Params (M): 451; Validation mIoU: 58.4 |
| semantic-segmentation-on-ade20k | ViT-Adapter-L (Mask2Former, BEiT pretrain) | Params (M): 571; Validation mIoU: 60.5 |
| semantic-segmentation-on-ade20k | ViT-Adapter-L (Mask2Former, BEiTv2 pretrain) | Params (M): 571; Validation mIoU: 61.5 |
| semantic-segmentation-on-ade20k-val | ViT-Adapter-L (UperNet, BEiT pretrain) | mIoU: 58.4 |
| semantic-segmentation-on-ade20k-val | ViT-Adapter-L (Mask2Former, BEiT pretrain) | mIoU: 60.5 |
| semantic-segmentation-on-cityscapes | ViT-Adapter-L (Mask2Former, BEiT pretrain) | Mean IoU (class): 85.2% |
| semantic-segmentation-on-cityscapes-val | ViT-Adapter-L | mIoU: 85.8 |
| semantic-segmentation-on-pascal-context | ViT-Adapter-L (Mask2Former, BEiT pretrain) | mIoU: 68.2 |
| semantic-segmentation-on-pascal-context | ViT-Adapter-L (UperNet, BEiT pretrain) | mIoU: 67.5 |