Jan Tönshoff; Martin Ritzert; Eran Rosenbluth; Martin Grohe

Abstract
The recent Long-Range Graph Benchmark (LRGB, Dwivedi et al. 2022) introduced a set of graph learning tasks strongly dependent on long-range interaction between vertices. Empirical evidence suggests that on these tasks Graph Transformers significantly outperform Message Passing GNNs (MPGNNs). In this paper, we carefully reevaluate multiple MPGNN baselines as well as the Graph Transformer GPS (Rampášek et al. 2022) on LRGB. Through a rigorous empirical analysis, we demonstrate that the reported performance gap is overestimated due to suboptimal hyperparameter choices. Notably, across multiple datasets the performance gap vanishes entirely after basic hyperparameter optimization. In addition, we discuss the impact of missing feature normalization on LRGB's vision datasets and highlight a spurious implementation of LRGB's link prediction metric. The principal aim of our paper is to establish a higher standard of empirical rigor within the graph machine learning community.
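The abstract's point about missing feature normalization can be made concrete with a minimal sketch. The function below z-scores each feature dimension using training-split statistics only; the toy feature matrix is illustrative and not the actual LRGB superpixel pipeline.

```python
import numpy as np

def normalize_features(train_x: np.ndarray, test_x: np.ndarray):
    """Z-score each feature column using statistics from the training split only."""
    mean = train_x.mean(axis=0)
    std = train_x.std(axis=0) + 1e-8  # guard against constant features
    return (train_x - mean) / std, (test_x - mean) / std

# Toy example: 4 nodes, 3 features with very different scales
# (mimicking, e.g., mixed color and coordinate features in a vision graph).
train = np.array([[0.1, 200.0, 5.0],
                  [0.2, 180.0, 7.0],
                  [0.0, 220.0, 6.0],
                  [0.3, 160.0, 8.0]])
test = np.array([[0.15, 190.0, 6.5]])

train_n, test_n = normalize_features(train, test)
```

After normalization every training feature column has zero mean and unit variance, so no single raw scale dominates the first learned linear layer.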
Benchmarks
| Benchmark | Model | Metrics |
|---|---|---|
| graph-classification-on-peptides-func | GatedGCN-tuned | AP: 0.6765±0.0047 |
| graph-classification-on-peptides-func | GCN-tuned | AP: 0.6860±0.0050 |
| graph-classification-on-peptides-func | GINE-tuned | AP: 0.6621±0.0067 |
| graph-classification-on-peptides-func | GPS-tuned | AP: 0.6534±0.0091 |
| graph-regression-on-peptides-struct | GatedGCN-tuned | MAE: 0.2477±0.0009 |
| graph-regression-on-peptides-struct | GCN-tuned | MAE: 0.2460±0.0007 |
| graph-regression-on-peptides-struct | GINE-tuned | MAE: 0.2473±0.0017 |
| graph-regression-on-peptides-struct | GPS-tuned | MAE: 0.2509±0.0014 |
| link-prediction-on-pcqm-contact | GPS-tuned | MRR: 0.3498±0.0005 MRR-ext-filtered: 0.4703±0.0014 |
| link-prediction-on-pcqm-contact | GINE-tuned | MRR: 0.3509±0.0006 MRR-ext-filtered: 0.4617±0.0005 |
| link-prediction-on-pcqm-contact | GatedGCN-tuned | MRR: 0.3495±0.0010 MRR-ext-filtered: 0.4670±0.0004 |
| link-prediction-on-pcqm-contact | GCN-tuned | MRR: 0.3424±0.0007 MRR-ext-filtered: 0.4526±0.0006 |
| node-classification-on-coco-sp | GatedGCN-tuned | macro F1: 0.2922±0.0018 |
| node-classification-on-coco-sp | GINE-tuned | macro F1: 0.2125±0.0009 |
| node-classification-on-coco-sp | GPS-tuned | macro F1: 0.3884±0.0055 |
| node-classification-on-coco-sp | GCN-tuned | macro F1: 0.1338±0.0007 |
| node-classification-on-pascalvoc-sp-1 | GINE-tuned | macro F1: 0.2718±0.0054 |
| node-classification-on-pascalvoc-sp-1 | GCN-tuned | macro F1: 0.2078±0.0031 |
| node-classification-on-pascalvoc-sp-1 | GPS-tuned | macro F1: 0.4440±0.0065 |
| node-classification-on-pascalvoc-sp-1 | GatedGCN-tuned | macro F1: 0.3880±0.0040 |
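The PCQM-Contact rows above report both a raw MRR and an externally filtered variant (MRR-ext-filtered), reflecting the metric-implementation issue the abstract mentions. The following sketch illustrates the general raw-vs-filtered distinction for one source node; the scores and indices are made up, and this is not the paper's actual evaluation code.

```python
def reciprocal_rank(scores, target_idx, positive_idx, filtered):
    """Reciprocal rank of the target candidate; optionally drop other true positives."""
    candidates = []
    for i, s in enumerate(scores):
        if filtered and i in positive_idx and i != target_idx:
            continue  # filtered setting: competing true positives don't penalize the rank
        candidates.append((s, i))
    candidates.sort(key=lambda t: -t[0])  # rank by descending score
    rank = next(r + 1 for r, (_, i) in enumerate(candidates) if i == target_idx)
    return 1.0 / rank

scores = [0.9, 0.8, 0.7, 0.1]   # hypothetical scores for one node's candidate links
positives = {0, 1}              # indices of true contacts for this node

raw = reciprocal_rank(scores, target_idx=1, positive_idx=positives, filtered=False)
filt = reciprocal_rank(scores, target_idx=1, positive_idx=positives, filtered=True)
# raw: target 1 ranks behind positive 0 -> 1/2; filtered: positive 0 removed -> 1/1
```

In the raw setting a correctly scored true contact can still be pushed down the ranking by the node's other true contacts, which is exactly why raw and filtered MRR values in the table differ.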