
Abstract
This paper studies a new approach called Dropout Graph Neural Networks (DropGNNs), which aims to overcome the limitations of the standard GNN framework. In DropGNNs, we execute multiple runs of a GNN on the input graph, dropping a random, independently chosen subset of nodes in each run. The results of these runs are then aggregated to obtain the final output. We prove that DropGNNs can distinguish various graph neighborhoods that cannot be separated by message-passing GNNs. We derive theoretical bounds on the number of runs required to ensure a reliable distribution of dropouts, and we prove several important properties regarding the expressiveness and limitations of DropGNNs. We experimentally validate our theoretical findings on expressiveness. Furthermore, we show that DropGNNs perform competitively on several standard GNN benchmarks.
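As a rough illustration of this multi-run dropout scheme, the sketch below runs an underlying GNN several times, masking out a random subset of nodes in each run, and averages the per-run outputs. This is a minimal hypothetical sketch, not the authors' exact implementation: the `gnn` callable, its `(x, edge_index)` signature, and the `num_runs` and `p` parameters are assumed names for illustration.

```python
import torch

def drop_gnn_forward(gnn, x, edge_index, num_runs=10, p=0.1):
    """Sketch of a DropGNN-style forward pass (assumed interface):
    run `gnn` several times, each time dropping nodes independently
    with probability p, then aggregate the run outputs by averaging."""
    outputs = []
    for _ in range(num_runs):
        # Drop each node independently with probability p by zeroing
        # its feature vector (a simple stand-in for removing the node).
        keep = (torch.rand(x.size(0), device=x.device) > p).float().unsqueeze(-1)
        outputs.append(gnn(x * keep, edge_index))
    # Aggregate across runs; the mean is one natural choice.
    return torch.stack(outputs, dim=0).mean(dim=0)
```

Averaging over runs makes the output approximately invariant to which particular nodes happen to be dropped; the paper's theoretical bounds concern how many runs are needed for this dropout distribution to be represented reliably.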
Code Repositories
karolismart/dropgnn
Official
pytorch
Mentioned in GitHub
Benchmarks
| Benchmark | Method | Metric |
|---|---|---|
| graph-classification-on-dd | DropGIN | Accuracy: 78.151±3.711 |
| graph-classification-on-enzymes | DropGIN | Accuracy: 65.128±4.117 |
| graph-classification-on-imdb-b | DropGIN | Accuracy: 75.7% |
| graph-classification-on-imdb-m | DropGIN | Accuracy: 51.4% |
| graph-classification-on-mutag | DropGIN | Accuracy: 90.4% |
| graph-classification-on-nci1 | DropGIN | Accuracy: 84.331±1.564 |
| graph-classification-on-nci109 | DropGIN | Accuracy: 83.961±1.141 |
| graph-classification-on-proteins | DropGIN | Accuracy: 76.3% |
| graph-classification-on-ptc | DropGIN | Accuracy: 66.3% |
| graph-regression-on-esr2 | DropGIN | R2: 0.675±0.000 RMSE: 0.503 |
| graph-regression-on-f2 | DropGIN | R2: 0.886±0.000 RMSE: 0.343 |
| graph-regression-on-kit | DropGIN | R2: 0.835±0.000 RMSE: 0.441 |
| graph-regression-on-lipophilicity | DropGIN | R2: 0.809±0.008 RMSE: 0.552±0.012 |
| graph-regression-on-parp1 | DropGIN | R2: 0.920±0.000 RMSE: 0.354 |
| graph-regression-on-pgr | DropGIN | R2: 0.702±0.000 RMSE: 0.527 |
| molecular-property-prediction-on-esol | DropGIN | R2: 0.935±0.012 RMSE: 0.520±0.048 |
| molecular-property-prediction-on-freesolv | DropGIN | R2: 0.972±0.005 RMSE: 0.657±0.059 |