Can Classic GNNs Be Strong Baselines for Graph-level Tasks? Simple Architectures Meet Excellence
Yuankai Luo, Lei Shi, Xiao-Ming Wu

Abstract
Message-passing Graph Neural Networks (GNNs) are often criticized for their limited expressiveness, issues like over-smoothing and over-squashing, and challenges in capturing long-range dependencies. Conversely, Graph Transformers (GTs) are regarded as superior due to their employment of global attention mechanisms, which potentially mitigate these challenges. Literature frequently suggests that GTs outperform GNNs in graph-level tasks, especially for graph classification and regression on small molecular graphs. In this study, we explore the untapped potential of GNNs through an enhanced framework, GNN+, which integrates six widely used techniques: edge feature integration, normalization, dropout, residual connections, feed-forward networks, and positional encoding, to effectively tackle graph-level tasks. We conduct a systematic re-evaluation of three classic GNNs (GCN, GIN, and GatedGCN) enhanced by the GNN+ framework across 14 well-known graph-level datasets. Our results reveal that, contrary to prevailing beliefs, these classic GNNs consistently match or surpass the performance of GTs, securing top-three rankings across all datasets and achieving first place in eight. Furthermore, they demonstrate greater efficiency, running several times faster than GTs on many datasets. This highlights the potential of simple GNN architectures, challenging the notion that complex mechanisms in GTs are essential for superior graph-level performance. Our source code is available at https://github.com/LUOyk1999/GNNPlus.
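To make the GNN+ recipe concrete, here is a minimal numpy sketch of one GCN-style layer augmented with several of the techniques the abstract lists: residual connections, normalization, dropout, and a feed-forward network. This is an illustrative reconstruction under assumed conventions (symmetric adjacency normalization with self-loops, post-norm residuals), not the authors' implementation; function and parameter names (`gcn_plus_layer`, `drop_mask`, etc.) are hypothetical.

```python
import numpy as np

def layer_norm(x, eps=1e-5):
    """Normalize each node's feature vector to zero mean, unit variance."""
    mu = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mu) / np.sqrt(var + eps)

def gcn_plus_layer(X, A, W, W_ffn1, W_ffn2, drop_mask=None):
    """One GCN layer in the spirit of GNN+: normalized message passing
    with residual connection, layer norm, dropout, and an FFN block.
    X: (n, d) node features; A: (n, n) adjacency; W: (d, d) weights;
    W_ffn1: (d, h), W_ffn2: (h, d) FFN weights."""
    # Symmetrically normalized adjacency with self-loops: D^{-1/2}(A+I)D^{-1/2}
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    A_norm = A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    # Message passing + ReLU, then residual connection and normalization
    H = layer_norm(X + np.maximum(A_norm @ X @ W, 0.0))
    # Dropout, expressed here as a pre-sampled binary mask (training only)
    if drop_mask is not None:
        H = H * drop_mask
    # Feed-forward network with a second residual connection
    H = layer_norm(H + np.maximum(H @ W_ffn1, 0.0) @ W_ffn2)
    return H
```

Edge feature integration and positional encoding, the remaining two techniques, would typically enter as additional terms in the message computation and as features concatenated to `X`, respectively.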
Benchmarks
| Benchmark | Methodology | Metrics |
|---|---|---|
| graph-classification-on-cifar10-100k | GatedGCN+ | Accuracy (%): 77.218 ± 0.381 |
| graph-classification-on-malnet-tiny | GatedGCN+ | Accuracy: 94.600 ± 0.570 |
| graph-classification-on-mnist | GCN+ | Accuracy: 98.382 ± 0.095 |
| graph-classification-on-mnist | GatedGCN+ | Accuracy: 98.712 ± 0.137 |
| graph-classification-on-peptides-func | GCN+ | AP: 0.7261 ± 0.0067 |
| graph-property-prediction-on-ogbg-code2 | GatedGCN+ | Test F1 score: 0.1896 ± 0.0024; Validation F1 score: 0.1742 ± 0.0027 |
| graph-property-prediction-on-ogbg-molhiv | GatedGCN+ | Ext. data: No; Number of params: 1,076,633; Test ROC-AUC: 0.8040 ± 0.0164; Validation ROC-AUC: 0.8329 ± 0.0158 |
| graph-property-prediction-on-ogbg-molpcba | GatedGCN+ | Ext. data: No; Number of params: 6,016,860; Test AP: 0.2981 ± 0.0024; Validation AP: 0.3011 ± 0.0037 |
| graph-property-prediction-on-ogbg-ppa | GatedGCN+ | Ext. data: No; Number of params: 5,547,557; Test Accuracy: 0.8258 ± 0.0055; Validation Accuracy: 0.7815 ± 0.0043 |
| graph-property-prediction-on-ogbg-ppa | GCN+ | Ext. data: No; Number of params: 5,549,605; Test Accuracy: 0.8077 ± 0.0041; Validation Accuracy: 0.7586 ± 0.0032 |
| graph-property-prediction-on-ogbg-ppa | GIN+ | Ext. data: No; Number of params: 8,173,605; Test Accuracy: 0.8107 ± 0.0053; Validation Accuracy: 0.7786 ± 0.0095 |
| graph-regression-on-peptides-struct | GCN+ | MAE: 0.2421 ± 0.0016 |
| graph-regression-on-zinc-500k | GIN+ | MAE: 0.065 |
| node-classification-on-cluster | GatedGCN+ | Accuracy: 79.128 ± 0.235 |
| node-classification-on-coco-sp | GatedGCN+ | Macro F1: 0.3802 ± 0.0015 |
| node-classification-on-pascalvoc-sp-1 | GatedGCN+ | Macro F1: 0.4263 ± 0.0057 |
| node-classification-on-pattern | GatedGCN+ | Accuracy: 87.029 ± 0.037 |