Graph Property Prediction on ogbg-molhiv

Evaluation Metrics

Test ROC-AUC
Validation ROC-AUC
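
Both numbers are ROC-AUC scores on the official scaffold splits of ogbg-molhiv. Below is a minimal sketch of how such a score is computed with the OGB `Evaluator`, assuming the `ogb` package is installed; the predictions are random placeholders, not output from any model in the table.

```python
import numpy as np
from ogb.graphproppred import Evaluator

# Official OGB evaluator for ogbg-molhiv; it scores binary HIV-activity
# predictions with ROC-AUC. Inputs have shape (num_graphs, num_tasks),
# where num_tasks = 1 for this dataset.
evaluator = Evaluator(name="ogbg-molhiv")

rng = np.random.default_rng(0)
num_graphs = 1000                                   # placeholder count, not the real split size
y_true = rng.integers(0, 2, size=(num_graphs, 1))   # ground-truth labels (0/1)
y_pred = rng.random(size=(num_graphs, 1))           # raw scores / probabilities, not hard labels

result = evaluator.eval({"y_true": y_true, "y_pred": y_pred})
print(result["rocauc"])  # ~0.5 for random predictions
```

The Test and Validation columns in the table below report this metric on the held-out test and validation splits, respectively.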

Evaluation Results

Performance of each model on this benchmark.

| Model | Test ROC-AUC | Validation ROC-AUC | Paper Title | Repository |
| --- | --- | --- | --- | --- |
| HyperFusion | 0.8475 ± 0.0003 | 0.8275 ± 0.0008 | - | - |
| PAS+FPs | 0.8420 ± 0.0015 | 0.8238 ± 0.0028 | - | - |
| HIG | 0.8403 ± 0.0021 | 0.8176 ± 0.0034 | - | - |
| DeepAUC | 0.8352 ± 0.0054 | 0.8238 ± 0.0061 | Large-scale Robust Deep AUC Maximization: A New Surrogate Loss and Empirical Studies on Medical Image Classification | |
| FingerPrint+GMAN | 0.8244 ± 0.0033 | 0.8329 ± 0.0039 | - | - |
| Neural FingerPrints | 0.8232 ± 0.0047 | 0.8331 ± 0.0054 | Molecular Representation Learning by Leveraging Chemical Information | - |
| Graphormer + FPs | 0.8225 ± 0.0001 | 0.8396 ± 0.0001 | Do Transformers Really Perform Bad for Graph Representation? | |
| Molecular FP + Random Forest | 0.8208 ± 0.0037 | 0.8036 ± 0.0059 | - | - |
| GPTrans-B | 0.8126 ± 0.0032 | - | Graph Propagation Transformer for Graph Representation Learning | |
| CIN | 0.8094 ± 0.0057 | 0.8277 ± 0.0099 | Weisfeiler and Lehman Go Cellular: CW Networks | |
| GSAT | 0.8067 ± 0.0950 | 0.8347 ± 0.0031 | Interpretable and Generalizable Graph Learning via Stochastic Attention Mechanism | |
| EGT | 0.806 ± 0.0065 | - | Global Self-Attention as a Replacement for Graph Convolution | |
| MorganFP+Rand. Forest | 0.8060 ± 0.0010 | 0.8420 ± 0.0030 | - | - |
| CIN-small | 0.8055 ± 0.0104 | 0.8310 ± 0.0102 | Weisfeiler and Lehman Go Cellular: CW Networks | |
| Graphormer | 0.8051 ± 0.0053 | 0.8310 ± 0.0089 | Do Transformers Really Perform Bad for Graph Representation? | |
| Graphormer (pre-trained on PCQM4M) | 0.8051 ± 0.0053 | 0.8310 ± 0.0089 | Do Transformers Really Perform Bad for Graph Representation? | |
| GatedGCN+ | 0.8040 ± 0.0164 | 0.8329 ± 0.0158 | Can Classic GNNs Be Strong Baselines for Graph-level Tasks? Simple Architectures Meet Excellence | |
| directional GSN | 0.8039 ± 0.0090 | 0.8473 ± 0.0096 | Improving Graph Neural Network Expressivity via Subgraph Isomorphism Counting | |
| P-WL | 0.8039 ± 0.0040 | 0.8279 ± 0.0059 | A Persistent Weisfeiler–Lehman Procedure for Graph Classification | - |
| Nested GIN+virtual node (ens) | 0.7986 ± 0.0105 | 0.8080 ± 0.0278 | Nested Graph Neural Networks | |
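
Several competitive entries in the table (e.g. "Molecular FP + Random Forest" and "MorganFP+Rand. Forest") are classical fingerprint baselines rather than graph neural networks. The exact pipelines behind those rows are not documented here; the sketch below is only a generic reconstruction of that kind of baseline, with illustrative hyperparameters (`radius=2`, `n_bits=2048`, 500 trees) and hypothetical helper names (`morgan_fp`, `fingerprint_rf_auc`).

```python
import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score

def morgan_fp(smiles: str, radius: int = 2, n_bits: int = 2048) -> np.ndarray:
    """ECFP-style Morgan fingerprint as a dense 0/1 vector."""
    mol = Chem.MolFromSmiles(smiles)
    if mol is None:
        raise ValueError(f"could not parse SMILES: {smiles}")
    bitvect = AllChem.GetMorganFingerprintAsBitVect(mol, radius, nBits=n_bits)
    return np.array([int(b) for b in bitvect.ToBitString()], dtype=np.uint8)

def fingerprint_rf_auc(train_smiles, train_labels, eval_smiles, eval_labels) -> float:
    """Fit a random forest on Morgan fingerprints and report ROC-AUC on a held-out split.

    The SMILES strings and labels are assumed to come from the official
    ogbg-molhiv scaffold split (train / valid / test).
    """
    X_train = np.stack([morgan_fp(s) for s in train_smiles])
    X_eval = np.stack([morgan_fp(s) for s in eval_smiles])

    clf = RandomForestClassifier(n_estimators=500, n_jobs=-1, random_state=0)
    clf.fit(X_train, train_labels)

    scores = clf.predict_proba(X_eval)[:, 1]  # probability of the positive (HIV-active) class
    return roc_auc_score(eval_labels, scores)
```

Leaderboard-style numbers would come from running such a pipeline on the official validation and test splits, scored with the OGB `Evaluator` shown earlier.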