Is Attention Better Than Matrix Decomposition?

Zhengyang Geng Meng-Hao Guo Hongxu Chen Xia Li Ke Wei Zhouchen Lin

Abstract

As an essential ingredient of modern deep learning, the attention mechanism, particularly self-attention, plays a vital role in discovering global correlations. However, is hand-crafted attention irreplaceable for modeling the global context? Our intriguing finding is that self-attention is not better than the matrix decomposition (MD) models developed 20 years ago in terms of performance and computational cost for encoding long-distance dependencies. We model the global context issue as a low-rank recovery problem and show that its optimization algorithms can help design global information blocks. This paper then proposes a series of Hamburgers, in which we employ the optimization algorithms for solving MDs to factorize the input representations into sub-matrices and reconstruct a low-rank embedding. Hamburgers with different MDs can perform favorably against the popular global context module, self-attention, when the gradients back-propagated through the MDs are handled carefully. Comprehensive experiments are conducted on vision tasks where learning the global context is crucial, including semantic segmentation and image generation, demonstrating significant improvements over self-attention and its variants.
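The abstract's core idea, factorizing the input representation with an MD optimization algorithm and using the low-rank reconstruction as the global context, can be sketched as follows. This is a minimal illustration, not the paper's implementation: it uses non-negative matrix factorization with Lee-Seung multiplicative updates as the "ham", and random non-negative linear maps as stand-ins for the learned "bread" layers.

```python
import numpy as np

def nmf_ham(X, rank=8, steps=6, eps=1e-6):
    """The 'ham': factorize a non-negative matrix X (d x n) into
    D (d x rank) and C (rank x n) with a few Lee-Seung multiplicative
    updates for ||X - DC||_F^2, then return the low-rank reconstruction
    D @ C as the global-context embedding."""
    d, n = X.shape
    rng = np.random.default_rng(0)
    D = rng.random((d, rank))
    C = rng.random((rank, n))
    for _ in range(steps):
        # multiplicative updates keep D and C non-negative by construction
        C *= (D.T @ X) / (D.T @ D @ C + eps)
        D *= (X @ C.T) / (D @ C @ C.T + eps)
    return D @ C

def hamburger(X, rank=8):
    """Lower bread -> ham -> upper bread, with a residual connection.
    The random non-negative W_in / W_out are illustrative stand-ins for
    learned linear layers; X is assumed non-negative (e.g. post-ReLU
    features flattened to channels x positions)."""
    d = X.shape[0]
    W_in = np.abs(np.random.default_rng(1).random((d, d)))
    W_out = np.abs(np.random.default_rng(2).random((d, d)))
    Z = W_in @ X            # lower bread
    Z = nmf_ham(Z, rank)    # ham: low-rank recovery of the global context
    return X + W_out @ Z    # upper bread plus residual
```

The reconstruction `D @ C` has rank at most `rank`, so the block injects only the low-rank (global) structure of the features back into the representation, which is the sense in which MD replaces self-attention here.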

Code Repositories

Gsunshine/Enjoy-Hamburger (official, PyTorch)
plumprc/MTS-Mixers (PyTorch, mentioned in GitHub)
toqitahamid/gasformer (PyTorch, mentioned in GitHub)

Benchmarks

Benchmark | Methodology | Metrics

conditional-image-generation-on-imagenet | HamGAN
  FID: 14.80
  Inception score: 58.75

semantic-segmentation-on-ade20k | Light-Ham (VAN-Base)
  GFLOPs (512 x 512): 34.4
  Params (M): 27.4
  Validation mIoU: 49.6

semantic-segmentation-on-ade20k | Light-Ham (VAN-Small, D=256)
  GFLOPs (512 x 512): 15.8
  Params (M): 13.8
  Validation mIoU: 45.2

semantic-segmentation-on-ade20k | Light-Ham (VAN-Huge)
  GFLOPs (512 x 512): 71.8
  Params (M): 61.1
  Validation mIoU: 51.5

semantic-segmentation-on-ade20k | Light-Ham (VAN-Large)
  GFLOPs (512 x 512): 55.0
  Params (M): 45.6
  Validation mIoU: 51.0

semantic-segmentation-on-ade20k | HamNet (ResNet-101)
  Validation mIoU: 46.8

semantic-segmentation-on-ade20k-val | Light-Ham (VAN-Large, 46M, IN-1k, MS)
  mIoU: 51.0

semantic-segmentation-on-ade20k-val | Light-Ham (VAN-Base, 27M, IN-1k, MS)
  mIoU: 49.6

semantic-segmentation-on-ade20k-val | Light-Ham (VAN-Huge, 61M, IN-1k, MS)
  mIoU: 51.5

semantic-segmentation-on-pascal-context | HamNet (ResNet-101)
  mIoU: 55.2

semantic-segmentation-on-pascal-voc-2012 | HamNet w/o COCO (ResNet-101)
  Mean IoU: 85.9%
