NESS: Node Embeddings from Static SubGraphs

Talip Ucar

Abstract

We present a framework for learning Node Embeddings from Static Subgraphs (NESS) using a graph autoencoder (GAE) in a transductive setting. NESS is based on two key ideas: i) Partitioning the training graph to multiple static, sparse subgraphs with non-overlapping edges using random edge split during data pre-processing, ii) Aggregating the node representations learned from each subgraph to obtain a joint representation of the graph at test time. Moreover, we propose an optional contrastive learning approach in transductive setting. We demonstrate that NESS gives a better node representation for link prediction tasks compared to current autoencoding methods that use either the whole graph or stochastic subgraphs. Our experiments also show that NESS improves the performance of a wide range of graph encoders and achieves state-of-the-art results for link prediction on multiple real-world datasets with edge homophily ratio ranging from strong heterophily to strong homophily.
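The two key ideas above can be sketched in a few lines. This is a minimal illustration, not the official implementation (see the AstraZeneca/NESS repository for that): it assumes a random permutation-based edge split and mean pooling as the aggregation function, and uses random matrices as stand-ins for the per-subgraph encoder outputs.

```python
import numpy as np

def random_edge_split(edges, k, seed=0):
    """Idea (i): partition the training edges once, during pre-processing,
    into k static subgraphs with non-overlapping edge sets."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(len(edges))
    return [edges[idx] for idx in np.array_split(perm, k)]

def aggregate(embeddings):
    """Idea (ii): combine per-subgraph node embeddings into one joint
    representation at test time (mean pooling assumed here)."""
    return np.mean(embeddings, axis=0)

# Toy graph: 6 nodes, 6 training edges, split into k=3 subgraphs.
edges = np.array([(0, 1), (1, 2), (2, 3), (3, 4), (4, 5), (5, 0)])
subgraphs = random_edge_split(edges, k=3)
assert sum(len(s) for s in subgraphs) == len(edges)  # edges partitioned, no overlap

# Stand-in for a GAE encoder: one (num_nodes x dim) embedding per subgraph.
embs = [np.random.default_rng(i).random((6, 8)) for i in range(len(subgraphs))]
joint = aggregate(embs)
print(joint.shape)  # (6, 8): one joint embedding per node
```

In the paper, the subgraphs are fixed before training (hence "static", in contrast to stochastic subsampling each epoch), and each is encoded by the same graph autoencoder; only the aggregation step runs at test time.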

Code Repositories

AstraZeneca/NESS (official, PyTorch)

Benchmarks

Benchmark                       Methodology   Metrics
Link Prediction on Citeseer     NESS          AP: 99.50%, AUC: 99.43%
Link Prediction on Cora         NESS          AP: 98.71%, AUC: 98.46%
Link Prediction on Pubmed       NESS          AP: 96.52%, AUC: 96.67%
