Can Classic GNNs Be Strong Baselines for Graph-level Tasks? Simple Architectures Meet Excellence

Yuankai Luo, Lei Shi, Xiao-Ming Wu


Abstract

Message-passing Graph Neural Networks (GNNs) are often criticized for their limited expressiveness, issues like over-smoothing and over-squashing, and challenges in capturing long-range dependencies. Conversely, Graph Transformers (GTs) are regarded as superior due to their employment of global attention mechanisms, which potentially mitigate these challenges. Literature frequently suggests that GTs outperform GNNs in graph-level tasks, especially for graph classification and regression on small molecular graphs. In this study, we explore the untapped potential of GNNs through an enhanced framework, GNN+, which integrates six widely used techniques: edge feature integration, normalization, dropout, residual connections, feed-forward networks, and positional encoding, to effectively tackle graph-level tasks. We conduct a systematic re-evaluation of three classic GNNs (GCN, GIN, and GatedGCN) enhanced by the GNN+ framework across 14 well-known graph-level datasets. Our results reveal that, contrary to prevailing beliefs, these classic GNNs consistently match or surpass the performance of GTs, securing top-three rankings across all datasets and achieving first place in eight. Furthermore, they demonstrate greater efficiency, running several times faster than GTs on many datasets. This highlights the potential of simple GNN architectures, challenging the notion that complex mechanisms in GTs are essential for superior graph-level performance. Our source code is available at https://github.com/LUOyk1999/GNNPlus.
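The abstract lists six techniques combined in the GNN+ framework. As a rough illustration (this is a minimal NumPy sketch, not the authors' PyTorch implementation; the function name `gcn_plus_layer` and all weight names are invented for this example), here is how five of them — edge feature integration, normalization, dropout, residual connections, and a feed-forward network — could fit into a single GCN-style layer. Positional encodings are typically concatenated to the input node features before the first layer, so they are omitted from the per-layer sketch.

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    """Normalize each node's feature vector to zero mean, unit variance."""
    mu = x.mean(axis=-1, keepdims=True)
    sigma = x.std(axis=-1, keepdims=True)
    return (x - mu) / (sigma + eps)

def gcn_plus_layer(x, edge_index, edge_attr, w_msg, w_edge, w_ffn1, w_ffn2,
                   dropout_p=0.0, rng=None):
    """One GCN-style layer with edge features, dropout, residual
    connections, layer normalization, and a feed-forward network.

    x:          (num_nodes, d) node features
    edge_index: (src, dst) arrays, each of shape (num_edges,)
    edge_attr:  (num_edges, d) edge features
    """
    n, d = x.shape
    src, dst = edge_index
    # message passing with edge-feature integration
    msg = x[src] @ w_msg + edge_attr @ w_edge
    # degree-normalized aggregation at the destination nodes
    agg = np.zeros((n, d))
    np.add.at(agg, dst, msg)
    deg = np.bincount(dst, minlength=n).clip(min=1).reshape(-1, 1)
    h = agg / deg
    # dropout (disabled when rng is None, i.e. "eval mode")
    if rng is not None and dropout_p > 0:
        mask = rng.random(h.shape) >= dropout_p
        h = h * mask / (1.0 - dropout_p)
    # residual connection + normalization
    h = layer_norm(x + h)
    # feed-forward network with its own residual
    ffn = np.maximum(h @ w_ffn1, 0.0) @ w_ffn2
    return layer_norm(h + ffn)

# usage on a toy 4-node cycle graph
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 8))
edge_index = (np.array([0, 1, 2, 3]), np.array([1, 2, 3, 0]))
edge_attr = rng.standard_normal((4, 8))
w = [rng.standard_normal((8, 8)) * 0.1 for _ in range(4)]
out = gcn_plus_layer(x, edge_index, edge_attr, *w)
```

The point of the sketch is that none of these components is exotic: each is a standard building block from the Transformer/deep-learning toolbox, which is consistent with the paper's claim that plumbing, not new architecture, closes the gap with GTs.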

Code Repositories

LUOyk1999/GNNPlus (official, PyTorch)
Benchmarks

| Benchmark | Method | Metrics |
|---|---|---|
| graph-classification-on-cifar10-100k | GatedGCN+ | Accuracy (%): 77.218 ± 0.381 |
| graph-classification-on-malnet-tiny | GatedGCN+ | Accuracy: 94.600 ± 0.570 |
| graph-classification-on-mnist | GCN+ | Accuracy: 98.382 ± 0.095 |
| graph-classification-on-mnist | GatedGCN+ | Accuracy: 98.712 ± 0.137 |
| graph-classification-on-peptides-func | GCN+ | AP: 0.7261 ± 0.0067 |
| graph-property-prediction-on-ogbg-code2 | GatedGCN+ | Test F1: 0.1896 ± 0.0024; Validation F1: 0.1742 ± 0.0027 |
| graph-property-prediction-on-ogbg-molhiv | GatedGCN+ | Ext. data: No; Params: 1,076,633; Test ROC-AUC: 0.8040 ± 0.0164; Validation ROC-AUC: 0.8329 ± 0.0158 |
| graph-property-prediction-on-ogbg-molpcba | GatedGCN+ | Ext. data: No; Params: 6,016,860; Test AP: 0.2981 ± 0.0024; Validation AP: 0.3011 ± 0.0037 |
| graph-property-prediction-on-ogbg-ppa | GatedGCN+ | Ext. data: No; Params: 5,547,557; Test Accuracy: 0.8258 ± 0.0055; Validation Accuracy: 0.7815 ± 0.0043 |
| graph-property-prediction-on-ogbg-ppa | GCN+ | Ext. data: No; Params: 5,549,605; Test Accuracy: 0.8077 ± 0.0041; Validation Accuracy: 0.7586 ± 0.0032 |
| graph-property-prediction-on-ogbg-ppa | GIN+ | Ext. data: No; Params: 8,173,605; Test Accuracy: 0.8107 ± 0.0053; Validation Accuracy: 0.7786 ± 0.0095 |
| graph-regression-on-peptides-struct | GCN+ | MAE: 0.2421 ± 0.0016 |
| graph-regression-on-zinc-500k | GIN+ | MAE: 0.065 |
| node-classification-on-cluster | GatedGCN+ | Accuracy: 79.128 ± 0.235 |
| node-classification-on-coco-sp | GatedGCN+ | Macro F1: 0.3802 ± 0.0015 |
| node-classification-on-pascalvoc-sp-1 | GatedGCN+ | Macro F1: 0.4263 ± 0.0057 |
| node-classification-on-pattern | GatedGCN+ | Accuracy: 87.029 ± 0.037 |
