Multi-Task Graph Autoencoders

Phi Vu Tran

Abstract

We examine two fundamental tasks associated with graph representation learning: link prediction and node classification. We present a new autoencoder architecture capable of learning a joint representation of local graph structure and available node features for the simultaneous multi-task learning of unsupervised link prediction and semi-supervised node classification. Our simple, yet effective and versatile model is efficiently trained end-to-end in a single stage, whereas previous related deep graph embedding methods require multiple training steps that are difficult to optimize. We provide an empirical evaluation of our model on five benchmark relational, graph-structured datasets and demonstrate significant improvement over three strong baselines for graph representation learning. Reference code and data are available at https://github.com/vuptran/graph-representation-learning
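The reference implementation is written in TensorFlow. As a rough sketch of the architecture the abstract describes, the Keras model below encodes each row of the feature-augmented adjacency matrix into a shared latent representation and decodes it through two heads: one reconstructing the row for unsupervised link prediction, the other emitting class probabilities for semi-supervised node classification, with both losses optimized jointly in a single stage. The layer sizes, untied decoder weights, and dataset dimensions are illustrative assumptions, not taken from the reference code; a faithful implementation would also mask unobserved edges in the reconstruction loss and restrict the classification loss to labeled nodes.

```python
# Illustrative multi-task graph autoencoder (not the paper's reference code).
# Input: one row of [A | X], the adjacency matrix augmented with node features.
from tensorflow import keras
from tensorflow.keras import layers

n_nodes = 2708      # hypothetical sizes, e.g. Cora-scale
n_features = 1433
n_classes = 7

inputs = keras.Input(shape=(n_nodes + n_features,))

# Encoder: compress the augmented adjacency row to a latent embedding.
h = layers.Dense(256, activation="relu")(inputs)
z = layers.Dense(128, activation="relu")(h)

# Decoder trunk shared by both task heads.
h_dec = layers.Dense(256, activation="relu")(z)

# Head 1: reconstruct the augmented adjacency row (logits) for link prediction.
recon = layers.Dense(n_nodes + n_features, name="link_recon")(h_dec)

# Head 2: softmax over class labels for node classification.
label = layers.Dense(n_classes, activation="softmax", name="node_class")(h_dec)

model = keras.Model(inputs, [recon, label])

# Joint single-stage objective: reconstruction + classification cross-entropy.
# A faithful version would mask unknown edges and unlabeled nodes here.
model.compile(
    optimizer="adam",
    loss={
        "link_recon": keras.losses.BinaryCrossentropy(from_logits=True),
        "node_class": "categorical_crossentropy",
    },
    loss_weights={"link_recon": 1.0, "node_class": 1.0},
)
model.summary()
```

At inference time, predicted links can be read off the reconstructed adjacency logits (higher values indicating more likely edges), while the classification head labels nodes directly, so a single forward pass serves both tasks.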

Code Repositories

vuptran/graph-representation-learning — official TensorFlow implementation

Benchmarks

Benchmark                          Methodology   Metrics
Link Prediction on Citeseer        MTGAE         Accuracy: 94.90%
Link Prediction on Cora            MTGAE         Accuracy: 94.60%
Link Prediction on Pubmed          MTGAE         Accuracy: 94.40%
Node Classification on Citeseer    MTGAE         Accuracy: 71.80%; validation: yes
Node Classification on Cora        MTGAE         Accuracy: 79.00%; validation: yes
Node Classification on Pubmed      MTGAE         Accuracy: 80.40%; training split: 20 per node; validation: yes
