
Vision Transformer Adapter for Dense Predictions

Abstract

This work presents ViT-Adapter, a simple yet efficient adapter that equips the plain Vision Transformer (ViT) for dense prediction tasks. Unlike recent variants that introduce vision-specific inductive biases into their architectures, the plain ViT suffers from weak prior assumptions and therefore underperforms on dense prediction. To address this, ViT-Adapter allows the plain ViT to reach performance comparable to transformers designed specifically for vision tasks. In our framework, the backbone is a plain ViT that can learn powerful representations from large-scale multi-modal data. When transferring to downstream tasks, a pre-training-free adapter injects image-related inductive biases into the model, making it suitable for a variety of dense prediction tasks. We verify the effectiveness of ViT-Adapter on multiple dense prediction tasks, including object detection, instance segmentation, and semantic segmentation. Notably, without using extra detection data, our ViT-Adapter-L achieves state-of-the-art results of 60.9 box AP and 53.0 mask AP on the COCO test-dev benchmark. We hope ViT-Adapter can serve as an effective alternative to vision-specific transformers and facilitate future research. Code and models will be released at https://github.com/czczup/ViT-Adapter.
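To make the idea concrete, below is a minimal PyTorch sketch of the adapter concept described in the abstract: a convolutional spatial prior module supplies image-related inductive biases, and an injector/extractor pair exchanges information between the plain ViT tokens and those spatial features. This is an illustrative sketch under simplifying assumptions, not the official implementation: the paper's adapter is built on multi-scale deformable attention, which is replaced here with standard multi-head cross-attention, and all module and variable names are hypothetical.

```python
import torch
import torch.nn as nn


class SpatialPriorModule(nn.Module):
    """Convolutional stem that produces spatial features carrying
    image-related inductive biases (locality, translation equivariance)."""

    def __init__(self, dim=256):
        super().__init__()
        self.stem = nn.Sequential(
            nn.Conv2d(3, dim // 4, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(dim // 4, dim, kernel_size=3, stride=2, padding=1), nn.ReLU(),
        )

    def forward(self, x):                     # x: (B, 3, H, W)
        c = self.stem(x)                      # (B, dim, H/4, W/4)
        return c.flatten(2).transpose(1, 2)   # (B, N_spatial, dim)


class InjectorExtractor(nn.Module):
    """Two cross-attention steps: inject spatial priors into the ViT tokens,
    then extract dense features back from the updated tokens."""

    def __init__(self, dim=256, num_heads=8):
        super().__init__()
        self.inject = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.extract = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, tokens, spatial):
        # Injector: ViT tokens query the convolutional spatial priors.
        tokens = tokens + self.inject(tokens, spatial, spatial)[0]
        # Extractor: spatial features query the updated ViT tokens.
        spatial = spatial + self.extract(spatial, tokens, tokens)[0]
        return tokens, spatial


# Toy usage: interleave the adapter with plain transformer blocks that
# stand in for a pre-trained ViT backbone.
blocks = nn.ModuleList(
    nn.TransformerEncoderLayer(256, 8, batch_first=True) for _ in range(4)
)
spm, adapter = SpatialPriorModule(256), InjectorExtractor(256)
tokens = torch.randn(2, 196, 256)            # stand-in for 14x14 ViT patch embeddings
spatial = spm(torch.randn(2, 3, 224, 224))   # spatial priors from the raw image
for blk in blocks:
    tokens, spatial = adapter(blk(tokens), spatial)
# `spatial` now carries backbone semantics and can be reshaped into feature
# maps for a dense prediction head (e.g. detection or segmentation).
```

The key design point the sketch illustrates is that the ViT backbone itself is left unchanged, so any pre-trained (including multi-modal) ViT checkpoint can be plugged in, while the adapter is trained from scratch on the downstream task.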

Code Repositories

czczup/vit-adapter (official, PyTorch), mentioned in GitHub
chenller/mmseg-extension (PyTorch), mentioned in GitHub

Benchmarks

| Benchmark | Method | Metrics |
|---|---|---|
| Instance Segmentation on COCO | ViT-Adapter-L (HTC++, BEiTv2 pretrain, multi-scale) | mask AP: 53.0 |
| Instance Segmentation on COCO | ViT-Adapter-L (HTC++, BEiT pretrain, multi-scale) | mask AP: 52.5 |
| Instance Segmentation on COCO | ViT-Adapter-L (HTC++, BEiTv2, O365, multi-scale) | mask AP: 54.5 |
| Instance Segmentation on COCO minival | ViT-Adapter-L (HTC++, BEiT pretrain, multi-scale) | mask AP: 52.2 |
| Instance Segmentation on COCO minival | ViT-Adapter-L (HTC++, BEiTv2, O365, multi-scale) | mask AP: 54.2 |
| Instance Segmentation on COCO minival | ViT-Adapter-L (HTC++, BEiTv2 pretrain, multi-scale) | mask AP: 52.5 |
| Object Detection on COCO | ViT-Adapter-L (HTC++, BEiT pretrain, multi-scale) | box mAP: 60.4 |
| Object Detection on COCO | ViT-Adapter-L (HTC++, BEiTv2 pretrain, multi-scale) | box mAP: 60.9 |
| Object Detection on COCO minival | ViT-Adapter-L (HTC++, BEiT pretrain, multi-scale) | box AP: 60.2 |
| Object Detection on COCO minival | ViT-Adapter-L (HTC++, BEiTv2 pretrain, multi-scale) | box AP: 60.5 |
| Object Detection on COCO-O | ViT-Adapter (BEiTv2-L) | Average mAP: 34.25; Effective Robustness: 7.79 |
| Panoptic Segmentation on COCO minival | ViT-Adapter-L (single-scale, BEiTv2 pretrain, Mask2Former) | AP: 48.9; PQ: 58.4; PQst: 48.4; PQth: 65.0 |
| Semantic Segmentation on ADE20K | ViT-Adapter-L (UperNet, BEiT pretrain) | Params (M): 451; Validation mIoU: 58.4 |
| Semantic Segmentation on ADE20K | ViT-Adapter-L (Mask2Former, BEiT pretrain) | Params (M): 571; Validation mIoU: 60.5 |
| Semantic Segmentation on ADE20K | ViT-Adapter-L (Mask2Former, BEiTv2 pretrain) | Params (M): 571; Validation mIoU: 61.5 |
| Semantic Segmentation on ADE20K val | ViT-Adapter-L (UperNet, BEiT pretrain) | mIoU: 58.4 |
| Semantic Segmentation on ADE20K val | ViT-Adapter-L (Mask2Former, BEiT pretrain) | mIoU: 60.5 |
| Semantic Segmentation on Cityscapes | ViT-Adapter-L (Mask2Former, BEiT pretrain) | Mean IoU (class): 85.2% |
| Semantic Segmentation on Cityscapes val | ViT-Adapter-L | mIoU: 85.8 |
| Semantic Segmentation on Pascal Context | ViT-Adapter-L (Mask2Former, BEiT pretrain) | mIoU: 68.2 |
| Semantic Segmentation on Pascal Context | ViT-Adapter-L (UperNet, BEiT pretrain) | mIoU: 67.5 |
