
Abstract
In recent years, a variety of graph neural network (GNN) frameworks for representation learning on graph-structured data have developed rapidly. These frameworks rely on aggregation and iteration schemes to learn node representations. However, information between nodes is inevitably lost during the learning process. To reduce this loss, we extend existing GNN frameworks by exploring their aggregation and iteration schemes with mutual-information methods. We propose a new approach that enlarges the conventional neighborhood during GNN aggregation so as to maximize mutual information. Through a series of experiments on several benchmark datasets, we show that the proposed approach improves the state-of-the-art performance on four types of graph tasks, including supervised and semi-supervised graph classification, graph link prediction, and graph edge generation and classification.
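The core idea of enlarging the conventional neighborhood during aggregation can be sketched as follows. This is a toy illustration only, not the paper's actual implementation: the `hops` parameter and the mean aggregator are assumptions made for clarity, standing in for the paper's mutual-information-driven neighborhood enlargement.

```python
import numpy as np

def aggregate(adj, features, hops=2):
    """Mean-aggregate features over an enlarged k-hop neighborhood.

    A hypothetical sketch of widening the conventional 1-hop
    neighborhood during aggregation; the choice of `hops` and the
    mean aggregator are illustrative, not the paper's exact design.
    """
    n = adj.shape[0]
    # Reachability within `hops` steps, including the node itself.
    reach = np.eye(n, dtype=bool)
    frontier = np.eye(n, dtype=bool)
    for _ in range(hops):
        frontier = (frontier @ adj) > 0
        reach |= frontier
    # Mean of neighbor features over the enlarged neighborhood.
    counts = reach.sum(axis=1, keepdims=True)
    return (reach @ features) / counts

# Path graph 0-1-2-3: with hops=2, node 0 aggregates over {0, 1, 2}
# instead of only its 1-hop neighborhood {0, 1}.
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)
feats = np.arange(4, dtype=float).reshape(-1, 1)
out = aggregate(adj, feats, hops=2)
```

With `hops=1` this reduces to ordinary mean aggregation; larger values let each node see more of the graph per layer, which is the mechanism the abstract's neighborhood-enlargement idea builds on.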
Code Repositories
Benchmarks
| Benchmark | Method | Metric |
|---|---|---|
| graph-classification-on-20news | sKNN-LDS | Accuracy: 47.9 |
| graph-classification-on-cancer | sKNN-LDS | Accuracy: 95.7 |
| graph-classification-on-citeseer | sKNN-LDS | Accuracy: 73.7 |
| graph-classification-on-collab | sGIN | Accuracy: 80.71% |
| graph-classification-on-cora | sKNN-LDS | Accuracy: 72.3 |
| graph-classification-on-digits | sKNN-LDS | Accuracy: 92.5 |
| graph-classification-on-imdb-b | sGIN | Accuracy: 77.94% |
| graph-classification-on-imdb-m | sGIN | Accuracy: 54.52% |
| graph-classification-on-mutag | sGIN | Accuracy: 94.14% |
| graph-classification-on-nci1 | sGIN | Accuracy: 83.85% |
| graph-classification-on-proteins | sGIN | Accuracy: 78.97% |
| graph-classification-on-ptc | sGIN | Accuracy: 73.56% |
| graph-classification-on-wine | sKNN-LDS | Accuracy: 98 |
| link-prediction-on-pubmed | sGraphite-VAE | AP: 96.3% AUC: 94.8% |