MOAT: Alternating Mobile Convolution and Attention Brings Strong Vision Models

Chenglin Yang, Siyuan Qiao, Qihang Yu, Xiaoding Yuan, Yukun Zhu, Alan Yuille, Hartwig Adam, Liang-Chieh Chen

Abstract

This paper presents MOAT, a family of neural networks that build on top of MObile convolution (i.e., inverted residual blocks) and ATtention. Unlike current works that stack separate mobile convolution and transformer blocks, we effectively merge them into a MOAT block. Starting with a standard Transformer block, we replace its multi-layer perceptron with a mobile convolution block, and further reorder it before the self-attention operation. The mobile convolution block not only enhances the network representation capacity, but also produces better downsampled features. Our conceptually simple MOAT networks are surprisingly effective, achieving 89.1% / 81.5% top-1 accuracy on ImageNet-1K / ImageNet-1K-V2 with ImageNet-22K pretraining. Additionally, MOAT can be seamlessly applied to downstream tasks that require large-resolution inputs by simply converting the global attention to window attention. Thanks to the mobile convolution that effectively exchanges local information between pixels (and thus across windows), MOAT does not need the extra window-shifting mechanism. As a result, on COCO object detection, MOAT achieves 59.2% box AP with 227M model parameters (single-scale inference and hard NMS), and on ADE20K semantic segmentation, MOAT attains 57.6% mIoU with 496M model parameters (single-scale inference). Finally, the tiny-MOAT family, obtained by simply reducing the channel sizes, also surprisingly outperforms several mobile-specific transformer-based models on ImageNet. The tiny-MOAT family is also benchmarked on downstream tasks, serving as a baseline for the community. We hope our simple yet effective MOAT will inspire more seamless integration of convolution and self-attention. Code is publicly available.
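
The block structure described above is straightforward to express in code. The following PyTorch sketch (matching the framework of the linked pytorch-MOAT repository) illustrates the idea: the Transformer MLP is replaced by a mobile convolution (inverted residual) block, which is applied before self-attention and can also perform downsampling. The expansion ratio, normalization layers, and attention implementation below are illustrative assumptions, not the authors' exact configuration.

```python
# Minimal MOAT-block sketch (assumed hyperparameters, not the official code).
import torch
import torch.nn as nn


class MBConv(nn.Module):
    """Mobile convolution (inverted residual): 1x1 expand -> 3x3 depthwise -> 1x1 project.
    A stride of 2 in the depthwise conv doubles as the downsampling operator."""

    def __init__(self, dim, out_dim, stride=1, expansion=4):
        super().__init__()
        hidden = dim * expansion
        self.use_residual = stride == 1 and dim == out_dim
        self.block = nn.Sequential(
            nn.BatchNorm2d(dim),
            nn.Conv2d(dim, hidden, 1, bias=False),
            nn.BatchNorm2d(hidden),
            nn.GELU(),
            nn.Conv2d(hidden, hidden, 3, stride=stride, padding=1,
                      groups=hidden, bias=False),  # depthwise conv
            nn.BatchNorm2d(hidden),
            nn.GELU(),
            nn.Conv2d(hidden, out_dim, 1, bias=False),
        )

    def forward(self, x):
        out = self.block(x)
        return x + out if self.use_residual else out


class MOATBlock(nn.Module):
    """MOAT block: the Transformer MLP is replaced by an MBConv and moved
    *before* self-attention, so local information is mixed first."""

    def __init__(self, dim, out_dim, stride=1, num_heads=8):
        super().__init__()
        self.mbconv = MBConv(dim, out_dim, stride=stride)
        self.norm = nn.LayerNorm(out_dim)
        self.attn = nn.MultiheadAttention(out_dim, num_heads, batch_first=True)

    def forward(self, x):  # x: (B, C, H, W)
        x = self.mbconv(x)
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)      # (B, H*W, C)
        normed = self.norm(tokens)
        attn_out, _ = self.attn(normed, normed, normed)
        tokens = tokens + attn_out                 # residual connection around attention
        return tokens.transpose(1, 2).reshape(b, c, h, w)


# Example: one downsampling MOAT block on a 14x14 feature map.
x = torch.randn(1, 96, 14, 14)
block = MOATBlock(dim=96, out_dim=192, stride=2)
print(block(x).shape)  # torch.Size([1, 192, 7, 7])
```

Because the mobile convolution already exchanges information between neighboring pixels, the global attention above could be replaced by non-overlapping window attention for high-resolution inputs without adding a window-shifting step, as described in the abstract.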

Code Repositories

RooKichenn/pytorch-MOAT (PyTorch)

Benchmarks

Benchmark: image-classification-on-imagenet
  MOAT-4 (22K+1K): Top-1 Accuracy 89.1%, Params 483.2M, GFLOPs 648.5
  MOAT-3 (1K only): Top-1 Accuracy 86.7%, Params 190M, GFLOPs 271
  MOAT-0 (1K only): Top-1 Accuracy 83.3%, Params 27.8M, GFLOPs 5.7

Benchmark: image-classification-on-imagenet-v2 (IN-22K pretraining)
  MOAT-4: Top-1 Accuracy 81.5%
  MOAT-3: Top-1 Accuracy 80.6%
  MOAT-2: Top-1 Accuracy 79.3%
  MOAT-1: Top-1 Accuracy 78.4%

Benchmark: instance-segmentation-on-coco-minival (single-scale)
  MOAT-3 (IN-22K pretraining): mask AP 50.3
  MOAT-2 (IN-22K pretraining): mask AP 49.3
  MOAT-1 (IN-1K pretraining): mask AP 49.0
  MOAT-0 (IN-1K pretraining): mask AP 47.4
  tiny-MOAT-3 (IN-1K pretraining): mask AP 47.0
  tiny-MOAT-2 (IN-1K pretraining): mask AP 45.0
  tiny-MOAT-1 (IN-1K pretraining): mask AP 44.6
  tiny-MOAT-0 (IN-1K pretraining): mask AP 43.3

Benchmark: object-detection-on-coco-1
  MOAT-3 (22K+1K): box AP 59.2
  MOAT-2: box AP 58.5

Benchmark: object-detection-on-coco-minival (single-scale)
  MOAT-3 (IN-22K pretraining): box AP 59.2
  MOAT-2 (IN-22K pretraining): box AP 58.5
  MOAT-1 (IN-1K pretraining): box AP 57.7
  MOAT-0 (IN-1K pretraining): box AP 55.9
  tiny-MOAT-3 (IN-1K pretraining): box AP 55.2
  tiny-MOAT-2 (IN-1K pretraining): box AP 53.0
  tiny-MOAT-1 (IN-1K pretraining): box AP 51.9
  tiny-MOAT-0 (IN-1K pretraining): box AP 50.5

Benchmark: semantic-segmentation-on-ade20k (single-scale)
  MOAT-4 (IN-22K pretraining): Validation mIoU 57.6, Params 496M
  MOAT-3 (IN-22K pretraining): Validation mIoU 56.5, Params 198M
  MOAT-2 (IN-22K pretraining): Validation mIoU 54.7, Params 81M
  tiny-MOAT-3 (IN-1K pretraining): Validation mIoU 47.5, Params 24M
  tiny-MOAT-2 (IN-1K pretraining): Validation mIoU 44.9, Params 13M
  tiny-MOAT-1 (IN-1K pretraining): Validation mIoU 43.1, Params 8M
  tiny-MOAT-0 (IN-1K pretraining): Validation mIoU 41.2, Params 6M
