Phi Vu Tran

Abstract
We examine two fundamental tasks associated with graph representation learning: link prediction and node classification. We present a new autoencoder architecture capable of learning a joint representation of local graph structure and available node features for the simultaneous multi-task learning of unsupervised link prediction and semi-supervised node classification. Our simple, yet effective and versatile model is efficiently trained end-to-end in a single stage, whereas previous related deep graph embedding methods require multiple training steps that are difficult to optimize. We provide an empirical evaluation of our model on five benchmark relational, graph-structured datasets and demonstrate significant improvement over three strong baselines for graph representation learning. Reference code and data are available at https://github.com/vuptran/graph-representation-learning.
Benchmarks
| Benchmark | Method | Accuracy | Notes |
|---|---|---|---|
| link-prediction-on-citeseer | MTGAE | 94.90% | |
| link-prediction-on-cora | MTGAE | 94.60% | |
| link-prediction-on-pubmed | MTGAE | 94.40% | |
| node-classification-on-citeseer | MTGAE | 71.80% | uses a validation set |
| node-classification-on-cora | MTGAE | 79.00% | uses a validation set |
| node-classification-on-pubmed | MTGAE | 80.40% | training split of 20 labeled nodes per class; uses a validation set |