Recipe for a General, Powerful, Scalable Graph Transformer

Ladislav Rampášek; Mikhail Galkin; Vijay Prakash Dwivedi; Anh Tuan Luu; Guy Wolf; Dominique Beaini

Abstract

We propose a recipe for building a general, powerful, scalable (GPS) graph Transformer with linear complexity and state-of-the-art results on a diverse set of benchmarks. Graph Transformers (GTs) have gained popularity in the field of graph representation learning with a variety of recent publications, but they lack a common foundation about what constitutes a good positional or structural encoding, and what differentiates them. In this paper, we summarize the different types of encodings with a clearer definition and categorize them as being $\textit{local}$, $\textit{global}$ or $\textit{relative}$. Prior GTs are constrained to small graphs with a few hundred nodes; here we propose the first architecture with complexity linear in the number of nodes and edges, $O(N+E)$, by decoupling the local real-edge aggregation from the fully-connected Transformer. We argue that this decoupling does not negatively affect the expressivity, with our architecture being a universal function approximator on graphs. Our GPS recipe consists of choosing 3 main ingredients: (i) positional/structural encoding, (ii) local message-passing mechanism, and (iii) global attention mechanism. We provide a modular framework $\textit{GraphGPS}$ that supports multiple types of encodings and that provides efficiency and scalability on both small and large graphs. We test our architecture on 16 benchmarks and show highly competitive results in all of them, showcasing the empirical benefits gained by the modularity and the combination of different strategies.
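
To make the recipe concrete, below is a minimal, hypothetical sketch of one GPS-style layer in plain PyTorch (it is not the authors' GraphGPS code, which builds on PyG/GraphGym). It illustrates the decoupling described in the abstract: a local message-passing update over the real edges runs in parallel with a global attention update over all nodes, and the two branches are summed and mixed by an MLP. The class and variable names (`GPSLayerSketch`, `local_msg`, etc.) are illustrative only, and the standard softmax attention used here is quadratic; the paper obtains $O(N+E)$ by substituting a linear-attention module such as Performer, which is omitted for brevity.

```python
# Minimal sketch of a GPS-style layer (assumed structure, not the official implementation).
import torch
import torch.nn as nn


class GPSLayerSketch(nn.Module):
    def __init__(self, dim: int, num_heads: int = 4):
        super().__init__()
        # Local module: a simple sum aggregation over real edges (stand-in for an MPNN such as GINE).
        self.local_msg = nn.Linear(2 * dim, dim)   # message built from (source node, edge) pair
        self.local_norm = nn.LayerNorm(dim)
        # Global module: full self-attention over all nodes of the graph.
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.global_norm = nn.LayerNorm(dim)
        # Feed-forward block applied after the two branches are combined.
        self.mlp = nn.Sequential(nn.Linear(dim, 2 * dim), nn.ReLU(), nn.Linear(2 * dim, dim))

    def forward(self, x, edge_index, edge_attr):
        # x: [N, dim] node features; edge_index: [2, E]; edge_attr: [E, dim]
        src, dst = edge_index
        msg = self.local_msg(torch.cat([x[src], edge_attr], dim=-1))
        local = torch.zeros_like(x).index_add_(0, dst, msg)   # sum incoming messages per node
        local = self.local_norm(x + local)                    # residual + norm

        attn_out, _ = self.attn(x.unsqueeze(0), x.unsqueeze(0), x.unsqueeze(0))
        global_ = self.global_norm(x + attn_out.squeeze(0))   # residual + norm

        return x + self.mlp(local + global_)                  # combine local and global branches


# Toy usage: 5 nodes, 4 directed edges. In the full recipe, positional/structural
# encodings (e.g. Laplacian PE or RWSE) would be added to x before the first layer.
layer = GPSLayerSketch(dim=16)
x = torch.randn(5, 16)
edge_index = torch.tensor([[0, 1, 2, 3], [1, 2, 3, 4]])
edge_attr = torch.randn(4, 16)
out = layer(x, edge_index, edge_attr)   # -> shape [5, 16]
```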

Code Repositories

graphcore/ogb-lsc-pcqm4mv2 (TensorFlow, mentioned in GitHub)
rampasek/GraphGPS (PyTorch, official, mentioned in GitHub)
linusbao/MoSE (PyTorch, mentioned in GitHub)
hamed1375/exphormer (PyTorch, mentioned in GitHub)

Benchmarks

Benchmark | Methodology | Metrics
graph-classification-on-cifar10-100k | GPS | Accuracy (%): 72.298
graph-classification-on-enzymes | GraphGPS | Accuracy: 78.667±4.625
graph-classification-on-imdb-b | GraphGPS | Accuracy: 79.250±3.096
graph-classification-on-malnet-tiny | GPS | Accuracy: 93.36 ± 0.6
graph-classification-on-mnist | GPS | Accuracy: 98.05
graph-classification-on-nci1 | GraphGPS | Accuracy: 85.110±1.423
graph-classification-on-nci109 | GraphGPS | Accuracy: 81.256±0.501
graph-classification-on-peptides-func | GPS | AP: 0.6535±0.0041
graph-classification-on-proteins | GraphGPS | Accuracy: 77.143±1.494
graph-property-prediction-on-ogbg-code2 | GPS | Ext. data: No; Number of params: 12454066; Test F1 score: 0.1894; Validation F1 score: 0.1739 ± 0.001
graph-property-prediction-on-ogbg-molhiv | GPS | Ext. data: No; Number of params: 558625; Test ROC-AUC: 0.7880; Validation ROC-AUC: 0.8255 ± 0.0092
graph-property-prediction-on-ogbg-molpcba | GPS | Ext. data: No; Number of params: 9744496; Test AP: 0.2907; Validation AP: 0.3015 ± 0.0038
graph-property-prediction-on-ogbg-ppa | GPS | Ext. data: No; Number of params: 3434533; Test Accuracy: 0.8015; Validation Accuracy: 0.7556 ± 0.0027
graph-regression-on-lipophilicity | GraphGPS | R2: 0.790±0.004; RMSE: 0.579±0.006
graph-regression-on-pcqm4mv2-lsc | GPS | Test MAE: 0.0862; Validation MAE: 0.0852
graph-regression-on-peptides-struct | GPS | MAE: 0.2500±0.0005
graph-regression-on-zinc | GPS | MAE: 0.070 ± 0.002
graph-regression-on-zinc | GINE | MAE: 0.070 ± 0.004
graph-regression-on-zinc-500k | GPS | MAE: 0.070
graph-regression-on-zinc-full | GraphGPS | Test MAE: 0.024±0.007
link-prediction-on-pcqm-contact | GPS | MRR: 0.3337±0.0006
molecular-property-prediction-on-esol | GraphGPS | R2: 0.911±0.003; RMSE: 0.613±0.010
molecular-property-prediction-on-freesolv | GraphGPS | R2: 0.861±0.037; RMSE: 1.462±0.188
node-classification-on-cluster | GPS | Accuracy: 77.95
node-classification-on-coco-sp | GPS | macro F1: 0.3412±0.0044
node-classification-on-pascalvoc-sp-1 | GPS | macro F1: 0.3748±0.0109
node-classification-on-pattern | GPS | Accuracy: 86.685
