Global Attention Improves Graph Networks Generalization

Omri Puny Heli Ben-Hamu Yaron Lipman

Abstract

This paper advocates incorporating a Low-Rank Global Attention (LRGA) module, a computation- and memory-efficient variant of dot-product attention (Vaswani et al., 2017), into Graph Neural Networks (GNNs) to improve their generalization power. To theoretically quantify the generalization properties granted by adding the LRGA module to GNNs, we focus on a specific family of expressive GNNs and show that augmenting it with LRGA provides algorithmic alignment with a powerful graph isomorphism test, namely the 2-Folklore Weisfeiler-Lehman (2-FWL) algorithm. In more detail, we: (i) consider the recent Random Graph Neural Network (RGNN) framework (Sato et al., 2020) and prove that it is universal in probability; (ii) show that RGNN augmented with LRGA aligns with the 2-FWL update step via polynomial kernels; and (iii) bound the sample complexity of the kernel's feature map when learned with a randomly initialized two-layer MLP. From a practical point of view, augmenting existing GNN layers with LRGA produces state-of-the-art results on current GNN benchmarks. Lastly, we observe that augmenting various GNN architectures with LRGA often closes the performance gap between different models.
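The abstract does not spell out the exact LRGA parameterization, but the efficiency idea it refers to, low-rank dot-product attention that never materializes the n x n attention matrix, can be sketched as below. This is a minimal, hypothetical PyTorch sketch, not the authors' implementation: the class and parameter names (LowRankGlobalAttention, rank, q_proj, k_proj, v_proj) and the ReLU feature map are my own assumptions for illustration. It exploits the associativity Q(K^T V) instead of (Q K^T)V and concatenates the resulting global term to the node features, mimicking how a GNN layer would be augmented.

```python
# Hypothetical sketch of low-rank global attention; not the paper's exact LRGA module.
import torch
import torch.nn as nn


class LowRankGlobalAttention(nn.Module):
    """Rank-k global attention over node features without forming the n x n matrix.

    Instead of softmax(Q K^T) V, which needs O(n^2) memory, we compute
    Q (K^T V), which costs O(n * k * d) time and memory for rank k.
    """

    def __init__(self, in_dim: int, rank: int):
        super().__init__()
        self.q_proj = nn.Linear(in_dim, rank)     # rank-k "query" map
        self.k_proj = nn.Linear(in_dim, rank)     # rank-k "key" map
        self.v_proj = nn.Linear(in_dim, in_dim)   # value map

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: [n, in_dim] node features of a single graph
        # ReLU keeps the implicit attention weights non-negative (an assumption,
        # standing in for whatever normalization the actual module uses).
        q = torch.relu(self.q_proj(x))            # [n, rank]
        k = torch.relu(self.k_proj(x))            # [n, rank]
        v = self.v_proj(x)                        # [n, in_dim]

        # Associativity trick: (K^T V) is [rank, in_dim], so the n x n matrix
        # Q K^T is never materialized.
        global_term = q @ (k.transpose(0, 1) @ v)                     # [n, in_dim]

        # Row-wise normalization by the implicit attention row sums.
        denom = (q @ k.sum(dim=0, keepdim=True).transpose(0, 1)).clamp(min=1e-6)
        global_term = global_term / denom                             # [n, in_dim]

        # Augmentation: concatenate the global term with the original features,
        # which a GNN layer's message-passing output would then consume.
        return torch.cat([x, global_term], dim=-1)                    # [n, 2 * in_dim]


if __name__ == "__main__":
    lrga = LowRankGlobalAttention(in_dim=64, rank=8)
    h = lrga(torch.randn(100, 64))
    print(h.shape)  # torch.Size([100, 128])
```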

Code Repositories

chuanqichen/cs224w (PyTorch)
omri1348/LRGA (PyTorch)

Benchmarks

Benchmark: link-property-prediction-on-ogbl-collab
Methodology: PLNLP + LRGA
Ext. data: No
Number of params: 35,200,656
Test Hits@50: 0.6909 ± 0.0055
Validation Hits@50: 1.0000 ± 0.0000

Benchmark: link-property-prediction-on-ogbl-collab
Methodology: LRGA + GCN
Ext. data: No
Number of params: 1,069,489
Test Hits@50: 0.5221 ± 0.0072
Validation Hits@50: 0.6088 ± 0.0059

Benchmark: link-property-prediction-on-ogbl-ddi
Methodology: LRGA + GCN
Ext. data: No
Number of params: 1,576,081
Test Hits@20: 0.6230 ± 0.0912
Validation Hits@20: 0.6675 ± 0.0058
