
Abstract
We present a Transformer-based graph neural network model, named UGformer, for learning graph representations. In particular, we introduce two UGformer variants: the first variant (released in September 2019) applies the Transformer to a set of sampled neighbors for each input node, while the second variant (released in May 2021) applies the Transformer to all input nodes. Experimental results show that the first UGformer variant achieves state-of-the-art accuracies on benchmark datasets for graph classification in both the inductive setting and the unsupervised transductive setting, and that the second UGformer variant achieves state-of-the-art performance on inductive text classification. The code is publicly available at: \url{https://github.com/daiquocnguyen/Graph-Transformer}.
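To make the first variant's per-node neighborhood attention concrete, here is a minimal sketch, not the authors' released implementation: it assumes PyTorch, and the names `sample_neighbors` and `NeighborTransformerLayer` are illustrative placeholders rather than identifiers from the repository.

```python
# Minimal sketch of UGformer variant 1 (Transformer over sampled neighbors).
# Assumption: PyTorch; helper/class names below are hypothetical.
import torch
import torch.nn as nn


def sample_neighbors(adj_lists, num_samples):
    """For each node, sample a fixed-size set of neighbor indices (with replacement)."""
    sampled = []
    for node, neighbors in enumerate(adj_lists):
        pool = neighbors if neighbors else [node]  # fall back to a self-loop
        idx = torch.randint(len(pool), (num_samples,))
        sampled.append(torch.tensor(pool)[idx])
    return torch.stack(sampled)  # [num_nodes, num_samples]


class NeighborTransformerLayer(nn.Module):
    """Runs a Transformer encoder over each node together with its sampled neighbors."""

    def __init__(self, dim, num_heads=4, num_layers=2):
        super().__init__()
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=dim, nhead=num_heads, batch_first=True
        )
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=num_layers)

    def forward(self, x, sampled_neighbors):
        # x: [num_nodes, dim]; sampled_neighbors: [num_nodes, num_samples]
        neighbor_feats = x[sampled_neighbors]                  # [N, S, dim]
        seq = torch.cat([x.unsqueeze(1), neighbor_feats], 1)   # node first, then its neighbors
        out = self.encoder(seq)                                # self-attention over the set
        return out[:, 0]                                       # updated node representation


if __name__ == "__main__":
    adj = [[1, 2], [0], [0, 1]]            # toy graph with 3 nodes
    feats = torch.randn(3, 16)
    sampled = sample_neighbors(adj, num_samples=4)
    layer = NeighborTransformerLayer(dim=16)
    print(layer(feats, sampled).shape)     # torch.Size([3, 16])
```

The second variant differs only in the attention scope: instead of attending within each sampled neighborhood, the Transformer attends across all input nodes of the graph at once.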
Code Repositories
daiquocnguyen/Graph-Transformer
Official
tf
Mentioned in GitHub
Benchmarks
| Benchmark | Method | Metric |
|---|---|---|
| graph-classification-on-collab | U2GNN (Unsupervised) | Accuracy: 95.62% |
| graph-classification-on-collab | U2GNN | Accuracy: 77.84% |
| graph-classification-on-dd | U2GNN (Unsupervised) | Accuracy: 95.67% |
| graph-classification-on-dd | U2GNN | Accuracy: 80.23% |
| graph-classification-on-imdb-b | U2GNN | Accuracy: 77.04% |
| graph-classification-on-imdb-b | U2GNN (Unsupervised) | Accuracy: 96.41% |
| graph-classification-on-imdb-m | U2GNN (Unsupervised) | Accuracy: 89.2% |
| graph-classification-on-imdb-m | U2GNN | Accuracy: 53.60% |
| graph-classification-on-mutag | U2GNN (Unsupervised) | Accuracy: 88.47% |
| graph-classification-on-mutag | U2GNN | Accuracy: 89.97% |
| graph-classification-on-proteins | U2GNN | Accuracy: 78.53% |
| graph-classification-on-proteins | U2GNN (Unsupervised) | Accuracy: 80.01% |
| graph-classification-on-ptc | U2GNN (Unsupervised) | Accuracy: 91.81% |
| graph-classification-on-ptc | U2GNN | Accuracy: 69.63% |