
Abstract
Transformers have recently attracted considerable attention in computer vision. However, the self-attention mechanism scales poorly with image size, which has limited its wide adoption in state-of-the-art vision backbones. In this paper we introduce an efficient and scalable attention model, multi-axis attention, which consists of two core components: blocked local attention and dilated global attention. This design enables global-local spatial interactions at arbitrary input resolutions while maintaining only linear computational complexity. We further introduce a new architectural element by effectively blending the proposed attention mechanism with convolutions, and on this basis build a simple, hierarchical vision backbone, MaxViT, obtained by repeating the basic block across multiple stages. Notably, MaxViT can "see" globally throughout the entire network, even in the early, high-resolution stages. We demonstrate the effectiveness of the model on a broad spectrum of vision tasks. On image classification, MaxViT achieves state-of-the-art performance under various settings: without extra data, it reaches 86.5% top-1 accuracy on ImageNet-1K; with ImageNet-21K pre-training, accuracy further improves to 88.7%. For downstream tasks, MaxViT as a backbone also delivers strong performance on object detection and visual aesthetic assessment. We additionally show that the proposed model has strong generative modeling capability on ImageNet, demonstrating the potential of the MaxViT block as a universal vision module. Source code and trained models will be released at: https://github.com/google-research/maxvit.
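To make the two attention axes concrete, below is a minimal PyTorch sketch of the partitioning idea behind multi-axis attention, based on our reading of the abstract. The function names (`block_partition`, `grid_partition`), the single-head attention, and the window/grid sizes are illustrative assumptions, not the official MaxViT implementation, which additionally uses MBConv layers, multi-head relative attention, and other components.

```python
# Illustrative sketch of the two token-grouping schemes behind multi-axis
# attention. Because each group has a fixed size (P*P or G*G), the total
# attention cost grows linearly with the number of pixels.
import torch
import torch.nn.functional as F

def block_partition(x, p):
    # Blocked local attention: split the (H, W) feature map into
    # non-overlapping P x P windows; attention runs inside each window.
    b, h, w, c = x.shape
    x = x.view(b, h // p, p, w // p, p, c)
    # -> (num_windows * B, P*P, C): each window is one attention "sequence"
    return x.permute(0, 1, 3, 2, 4, 5).reshape(-1, p * p, c)

def grid_partition(x, g):
    # Dilated global attention: overlay a fixed G x G grid and group pixels
    # sharing the same grid offset, giving sparse, image-wide mixing.
    b, h, w, c = x.shape
    x = x.view(b, g, h // g, g, w // g, c)
    # -> (num_groups * B, G*G, C): tokens in a group span the whole image
    return x.permute(0, 2, 4, 1, 3, 5).reshape(-1, g * g, c)

def self_attention(tokens):
    # Plain single-head scaled dot-product attention over each token group.
    scale = tokens.shape[-1] ** -0.5
    attn = F.softmax(tokens @ tokens.transpose(-2, -1) * scale, dim=-1)
    return attn @ tokens

x = torch.randn(2, 16, 16, 32)                        # (B, H, W, C) features
local_out = self_attention(block_partition(x, p=4))   # local interactions
global_out = self_attention(grid_partition(x, g=4))   # global interactions
print(local_out.shape, global_out.shape)              # both: (32, 16, 32)
```

In the paper's full block, these two attention stages are stacked sequentially after an MBConv layer; the sketch above only shows why both local and global interactions stay linear in the input size.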
Code Repositories
| Repository | Framework | Notes |
|---|---|---|
| google-research/maxvit | tf | Official; mentioned in GitHub |
| hankyul2/maxvit-pytorch | pytorch | Mentioned in GitHub |
| RooKichenn/pytorch-MaxViT | pytorch | Mentioned in GitHub |
| lucidrains/vit-pytorch | pytorch | Mentioned in GitHub |
| qwopqwop200/MaxVIT-pytorch | pytorch | Mentioned in GitHub |
| Mind23-2/MindCode-3/tree/main/NFNet | mindspore | |
| google-research/maxim | jax | Mentioned in GitHub |
| ChristophReich1996/MaxViT | pytorch | Mentioned in GitHub |
| lucidrains/imagen-pytorch | pytorch | Mentioned in GitHub |
| towhee-io/towhee | pytorch | |
Benchmarks
| Benchmark | Method | Metrics |
|---|---|---|
| image-classification-on-imagenet | MaxViT-B (224res) | GFLOPs: 23.4; Params: 120M; Top-1 Accuracy: 84.94% |
| image-classification-on-imagenet | MaxViT-L (384res, 21K) | Top-1 Accuracy: 88.32% |
| image-classification-on-imagenet | MaxViT-S (224res) | GFLOPs: 11.7; Params: 69M; Top-1 Accuracy: 84.45% |
| image-classification-on-imagenet | MaxViT-B (512res) | Top-1 Accuracy: 86.7% |
| image-classification-on-imagenet | MaxViT-XL (512res, JFT) | Top-1 Accuracy: 89.53% |
| image-classification-on-imagenet | MaxViT-L (224res) | GFLOPs: 43.9; Params: 212M; Top-1 Accuracy: 85.17% |
| image-classification-on-imagenet | MaxViT-L (384res) | Top-1 Accuracy: 86.4% |
| image-classification-on-imagenet | MaxViT-B (384res, JFT) | Top-1 Accuracy: 88.69% |
| image-classification-on-imagenet | MaxViT-L (512res, 21K) | Top-1 Accuracy: 88.46% |
| image-classification-on-imagenet | MaxViT-XL (512res, 21K) | Top-1 Accuracy: 88.7% |
| image-classification-on-imagenet | MaxViT-B (384res) | Top-1 Accuracy: 86.34% |
| image-classification-on-imagenet | MaxViT-L (512res, JFT) | Top-1 Accuracy: 89.41% |
| image-classification-on-imagenet | MaxViT-T (224res) | GFLOPs: 5.6; Params: 31M; Top-1 Accuracy: 83.62% |
| image-classification-on-imagenet | MaxViT-L (384res, JFT) | Top-1 Accuracy: 89.12% |
| image-classification-on-imagenet | MaxViT-T (384res) | Top-1 Accuracy: 85.72% |
| image-classification-on-imagenet | MaxViT-XL (384res, 21K) | Top-1 Accuracy: 88.51% |
| image-classification-on-imagenet | MaxViT-S (512res) | Top-1 Accuracy: 86.19% |
| image-classification-on-imagenet | MaxViT-B (512res, JFT) | Top-1 Accuracy: 88.82% |
| image-classification-on-imagenet | MaxViT-B (512res, 21K) | Top-1 Accuracy: 88.38% |
| image-classification-on-imagenet | MaxViT-XL (384res, JFT) | Top-1 Accuracy: 89.36% |
| object-detection-on-coco-2017 | MaxViT-T | AP: 52.1; AP50: 71.9; AP75: 56.8; APM: 44.6; APM50: 69.1; APM75: 48.4 |
| object-detection-on-coco-2017 | MaxViT-S | AP: 53.1; AP50: 72.5; AP75: 58.1; APM: 45.4; APM50: 69.8; APM75: 49.5 |
| object-detection-on-coco-2017 | MaxViT-B | AP: 53.4; AP50: 72.9; AP75: 58.1; APM: 45.7; APM50: 70.3; APM75: 50.0 |