
Abstract
Graph neural networks (GNNs) excel at graph representation learning but struggle with heterophilic data and long-range dependencies. Graph Transformers address these issues through self-attention, yet face scalability and noise challenges on large graphs. To overcome these limitations, we propose GNNMoE (Mixture of Experts in Graph Neural Networks), a universal model architecture for node classification. The architecture flexibly combines fine-grained message-passing operations with a mixture-of-experts mechanism to construct feature-encoding blocks. In addition, by introducing soft and hard gating layers, the most suitable expert network is assigned to each node, enhancing the model's expressive power and its adaptability to different types of graphs. We further introduce adaptive residual connections and an enhanced feed-forward network (FFN) module into GNNMoE, improving the expressiveness of node representations. Extensive experiments show that GNNMoE performs strongly across diverse graph datasets, effectively mitigating over-smoothing and global noise and improving robustness and adaptability, while maintaining computational efficiency on large-scale graphs.
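The core ideas of the abstract — per-node expert routing over message-passing operations, soft vs. hard gating, an adaptive residual, and an FFN — can be sketched in PyTorch. This is a minimal illustrative block, not the authors' implementation: the class name `GNNMoEBlock`, the use of simple linear experts after a shared aggregation step, and the scalar residual weight `alpha` are all assumptions for clarity.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GNNMoEBlock(nn.Module):
    """Hypothetical sketch of one GNNMoE-style encoder block: a gating
    network routes each node to a mixture of message-passing 'experts',
    followed by an adaptive residual connection and an FFN. Details are
    illustrative, not the paper's exact architecture."""

    def __init__(self, dim, num_experts=3, hard=False):
        super().__init__()
        # Each expert is a distinct transform applied after neighborhood
        # aggregation (standing in for GCN/SAGE/GAT-like P operations).
        self.experts = nn.ModuleList(nn.Linear(dim, dim) for _ in range(num_experts))
        self.gate = nn.Linear(dim, num_experts)        # per-node gating scores
        self.alpha = nn.Parameter(torch.tensor(0.5))   # adaptive residual weight (assumed scalar)
        self.ffn = nn.Sequential(nn.Linear(dim, 2 * dim), nn.GELU(), nn.Linear(2 * dim, dim))
        self.hard = hard

    def forward(self, x, adj):
        # x: node features (n, dim); adj: row-normalized adjacency (n, n)
        h = adj @ x                                    # shared message-passing step
        scores = self.gate(x)                          # (n, num_experts)
        if self.hard:
            # hard gating: each node uses only its top-scoring expert
            idx = scores.argmax(dim=-1)
            w = F.one_hot(idx, scores.size(-1)).float()
        else:
            w = torch.softmax(scores, dim=-1)          # soft gating: weighted mixture
        out = torch.stack([e(h) for e in self.experts], dim=-1)  # (n, dim, E)
        mixed = (out * w.unsqueeze(1)).sum(-1)         # gate-weighted expert outputs
        mixed = self.alpha * x + (1 - self.alpha) * mixed        # adaptive residual
        return mixed + self.ffn(mixed)                 # enhanced FFN with residual

# Toy usage: 5 nodes with 8-dim features on a directed ring graph
n, d = 5, 8
adj = torch.roll(torch.eye(n), 1, dims=1)  # each node aggregates one neighbor
x = torch.randn(n, d)
block = GNNMoEBlock(d)
y = block(x, adj)
print(y.shape)  # torch.Size([5, 8])
```

Hard gating keeps inference sparse (one expert per node), while soft gating lets gradients flow to all experts during training; the abstract's design uses both kinds of gating layers.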
Code Repository
GISec-Team/GNNMoE
Official
pytorch
Mentioned in GitHub
Benchmarks
| Benchmark | Method | Metric |
|---|---|---|
| node-classification-on-actor | GNNMoE(GAT-like P) | Accuracy: 37.76±0.98 |
| node-classification-on-actor | GNNMoE(GCN-like P) | Accuracy: 37.59±1.36 |
| node-classification-on-actor | GNNMoE(SAGE-like P) | Accuracy: 37.97±1.01 |
| node-classification-on-amazon-computers-1 | GNNMoE(GCN-like P) | Accuracy: 92.17±0.50 |
| node-classification-on-amazon-computers-1 | GNNMoE(SAGE-like P) | Accuracy: 91.85±0.39 |
| node-classification-on-amazon-computers-1 | GNNMoE(GAT-like P) | Accuracy: 91.98±0.46 |
| node-classification-on-amazon-photo-1 | GNNMoE(GAT-like P) | Accuracy: 95.71±0.37 |
| node-classification-on-amazon-photo-1 | GNNMoE(GCN-like P) | Accuracy: 95.81±0.41 |
| node-classification-on-amazon-photo-1 | GNNMoE(SAGE-like P) | Accuracy: 95.46±0.24 |
| node-classification-on-chameleon-48-32-20 | GNNMoE(GAT-like P) | Accuracy: 45.56±3.94 |
| node-classification-on-chameleon-48-32-20 | GNNMoE(GCN-like P) | Accuracy: 47.19±2.93 |
| node-classification-on-chameleon-48-32-20 | GNNMoE(SAGE-like P) | Accuracy: 45.73±3.19 |
| node-classification-on-coauthor-cs | GNNMoE(SAGE-like P) | Accuracy: 95.68±0.24 |
| node-classification-on-coauthor-cs | GNNMoE(GAT-like P) | Accuracy: 95.72±0.23 |
| node-classification-on-coauthor-cs | GNNMoE(GCN-like P) | Accuracy: 95.81±0.26 |
| node-classification-on-coauthor-physics | GNNMoE(GCN-like P) | Accuracy: 97.03±0.13 |
| node-classification-on-coauthor-physics | GNNMoE(SAGE-like P) | Accuracy: 96.81±0.22 |
| node-classification-on-coauthor-physics | GNNMoE(GAT-like P) | Accuracy: 97.05±0.19 |
| node-classification-on-facebook | GNNMoE(SAGE-like P) | Accuracy: 94.63±0.36 |
| node-classification-on-facebook | GNNMoE(GCN-like P) | Accuracy: 95.53±0.35 |
| node-classification-on-facebook | GNNMoE(GAT-like P) | Accuracy: 95.21±0.25 |
| node-classification-on-penn94 | GNNMoE(GCN-like P) | Accuracy: 85.11±0.39 |
| node-classification-on-penn94 | GNNMoE(GAT-like P) | Accuracy: 81.98±0.47 |
| node-classification-on-penn94 | GNNMoE(SAGE-like P) | Accuracy: 84.05±0.37 |
| node-classification-on-roman-empire | GNNMoE(GAT-like P) | Accuracy: 87.29±0.60 |
| node-classification-on-roman-empire | GNNMoE(GCN-like P) | Accuracy: 85.05±0.55 |
| node-classification-on-roman-empire | GNNMoE(SAGE-like P) | Accuracy: 86.00±0.45 |
| node-classification-on-squirrel-48-32-20 | GNNMoE(GAT-like P) | Accuracy: 39.19±3.94 |
| node-classification-on-squirrel-48-32-20 | GNNMoE(GCN-like P) | Accuracy: 44.02±2.59 |
| node-classification-on-squirrel-48-32-20 | GNNMoE(SAGE-like P) | Accuracy: 39.19±2.84 |
| node-classification-on-tolokers | GNNMoE(SAGE-like P) | AUCROC: 83.96±0.75 |
| node-classification-on-tolokers | GNNMoE(GAT-like P) | AUCROC: 85.45±0.94 |
| node-classification-on-tolokers | GNNMoE(GCN-like P) | AUCROC: 84.77±0.93 |