Dilated Neighborhood Attention Transformer

Ali Hassani, Humphrey Shi

Abstract

Transformers are quickly becoming one of the most heavily applied deep learning architectures across modalities, domains, and tasks. In vision, on top of ongoing efforts into plain transformers, hierarchical transformers have also gained significant attention, thanks to their performance and easy integration into existing frameworks. These models typically employ localized attention mechanisms, such as the sliding-window Neighborhood Attention (NA) or Swin Transformer's Shifted Window Self Attention. While effective at reducing self attention's quadratic complexity, local attention weakens two of the most desirable properties of self attention: long range inter-dependency modeling, and global receptive field. In this paper, we introduce Dilated Neighborhood Attention (DiNA), a natural, flexible and efficient extension to NA that can capture more global context and expand receptive fields exponentially at no additional cost. NA's local attention and DiNA's sparse global attention complement each other, and therefore we introduce Dilated Neighborhood Attention Transformer (DiNAT), a new hierarchical vision transformer built upon both. DiNAT variants enjoy significant improvements over strong baselines such as NAT, Swin, and ConvNeXt. Our large model is faster and ahead of its Swin counterpart by 1.6% box AP in COCO object detection, 1.4% mask AP in COCO instance segmentation, and 1.4% mIoU in ADE20K semantic segmentation. Paired with new frameworks, our large variant is the new state of the art panoptic segmentation model on COCO (58.5 PQ) and ADE20K (49.4 PQ), and instance segmentation model on Cityscapes (45.1 AP) and ADE20K (35.4 AP) (no extra data). It also matches the state of the art specialized semantic segmentation models on ADE20K (58.1 mIoU), and ranks second on Cityscapes (84.5 mIoU) (no extra data).
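
The core idea is easy to picture: each query attends to a fixed number of neighboring keys, but those neighbors are spaced `dilation` positions apart, so the same number of attended tokens covers a wider span. The snippet below is a minimal, illustrative 1-D sketch of dilated neighborhood attention written in plain PyTorch; it is not the official implementation (the paper's models use the NATTEN library's 2-D CUDA kernels), and the function name, tensor shapes, and the naive per-token loop are choices made here purely for clarity. With `dilation=1` it reduces to ordinary Neighborhood Attention (NA); larger dilations give the sparse, more global attention pattern of DiNA.

```python
# Illustrative 1-D dilated neighborhood attention (educational sketch only;
# the real DiNAT models use NATTEN's optimized 2-D kernels).
import torch

def dilated_neighborhood_attention_1d(q, k, v, kernel_size=7, dilation=1):
    """q, k, v: (batch, length, dim). Each query attends to `kernel_size` keys
    spaced `dilation` positions apart, centered on the query and clamped so the
    window never leaves the sequence (as in Neighborhood Attention)."""
    B, L, D = q.shape
    assert kernel_size % 2 == 1, "odd kernel size keeps the window centered"
    span = dilation * (kernel_size // 2)          # half-width of the dilated window
    assert L >= 2 * span + 1, "sequence must fit at least one full window"
    out = torch.empty_like(q)
    for i in range(L):
        # Clamp the window center so border queries still see kernel_size keys.
        c = min(max(i, span), L - 1 - span)
        idx = torch.arange(c - span, c + span + 1, dilation, device=q.device)
        attn = (q[:, i : i + 1] @ k[:, idx].transpose(1, 2)) / D ** 0.5  # (B, 1, kernel_size)
        out[:, i] = (attn.softmax(dim=-1) @ v[:, idx]).squeeze(1)
    return out

# Toy usage: dilation=1 behaves like local NA; dilation=4 covers a 4x wider span
# with the same number of attended keys and the same cost.
q = k = v = torch.randn(2, 64, 32)
local_out = dilated_neighborhood_attention_1d(q, k, v, kernel_size=7, dilation=1)
dilated_out = dilated_neighborhood_attention_1d(q, k, v, kernel_size=7, dilation=4)
```

As the abstract notes, DiNAT simply interleaves local NA layers (dilation 1) with DiNA layers of larger dilation, so local and sparse global attention complement each other and the receptive field grows without extra compute.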

Benchmarks

| Benchmark | Methodology | Metrics |
| --- | --- | --- |
| image-classification-on-imagenet | DiNAT-Base | GFLOPs: 13.7; Params: 90M; Top-1 Accuracy: 84.4% |
| image-classification-on-imagenet | DiNAT_s-Large (224x224; pretrained on ImageNet-22K @ 224x224) | GFLOPs: 34.5; Top-1 Accuracy: 86.5% |
| image-classification-on-imagenet | DiNAT-Mini | GFLOPs: 2.7; Params: 20M; Top-1 Accuracy: 81.8% |
| image-classification-on-imagenet | DiNAT-Small | GFLOPs: 7.8; Params: 51M; Top-1 Accuracy: 83.8% |
| image-classification-on-imagenet | DiNAT-Large (384x384; pretrained on ImageNet-22K @ 224x224) | GFLOPs: 89.7; Top-1 Accuracy: 87.4% |
| image-classification-on-imagenet | DiNAT-Large (11x11 kernel; 384x384; pretrained on ImageNet-22K @ 224x224) | GFLOPs: 92.4; Params: 200M; Top-1 Accuracy: 87.5% |
| image-classification-on-imagenet | DiNAT_s-Large (384x384; pretrained on ImageNet-22K @ 224x224) | GFLOPs: 101.5; Params: 197M; Top-1 Accuracy: 87.4% |
| image-classification-on-imagenet | DiNAT-Tiny | GFLOPs: 4.3; Params: 28M; Top-1 Accuracy: 82.7% |
| instance-segmentation-on-ade20k-val | DiNAT-L (Mask2Former, single-scale) | AP: 35.4; APL: 55.5; APM: 39.0; APS: 16.3 |
| instance-segmentation-on-cityscapes-val | DiNAT-L (Mask2Former, single-scale) | AP50: 72.6; mask AP: 45.1 |
| instance-segmentation-on-coco-minival | DiNAT-L (Mask2Former, single-scale) | AP50: 75.0; mask AP: 50.8 |
| panoptic-segmentation-on-ade20k-val | DiNAT-L (Mask2Former, 640x640) | AP: 35.0; PQ: 49.4; mIoU: 56.3 |
| panoptic-segmentation-on-cityscapes-val | DiNAT-L (Mask2Former) | AP: 44.5; PQ: 67.2; mIoU: 83.4 |
| panoptic-segmentation-on-coco-minival | DiNAT-L (Mask2Former, single-scale) | AP: 49.2; PQ: 58.5; PQst: 48.8; PQth: 64.9; mIoU: 68.3 |
| semantic-segmentation-on-ade20k | DiNAT-Base (UperNet) | Validation mIoU: 50.4 |
| semantic-segmentation-on-ade20k | DiNAT-Tiny (UperNet) | Validation mIoU: 48.8 |
| semantic-segmentation-on-ade20k | DiNAT_s-Large (UperNet) | Validation mIoU: 54.6 |
| semantic-segmentation-on-ade20k | DiNAT-Small (UperNet) | Validation mIoU: 49.9 |
| semantic-segmentation-on-ade20k | DiNAT-Large (UperNet) | Validation mIoU: 54.9 |
| semantic-segmentation-on-ade20k | DiNAT-L (Mask2Former) | Validation mIoU: 58.1 |
| semantic-segmentation-on-ade20k | DiNAT-Mini (UperNet) | Validation mIoU: 47.2 |
| semantic-segmentation-on-ade20k-val | DiNAT-L (Mask2Former) | mIoU: 58.1 |
| semantic-segmentation-on-cityscapes-val | DiNAT-L (Mask2Former) | mIoU: 84.5 |
