HyperAI

Breaking the Limit of Graph Neural Networks by Improving the Assortativity of Graphs with Local Mixing Patterns

Susheel Suresh; Vinith Budde; Jennifer Neville; Pan Li; Jianzhu Ma

Abstract

Graph neural networks (GNNs) have achieved tremendous success on multiple graph-based learning tasks by fusing network structure and node features. Modern GNN models are built upon iterative aggregation of neighbor (proximity) features via message passing. Their prediction performance has been shown to be strongly bounded by assortative mixing in the graph, a key property whereby nodes with similar attributes mix/connect with each other. We observe that real-world networks exhibit heterogeneous or diverse mixing patterns, and that conventional global measures of assortativity, such as the global assortativity coefficient, may not be representative statistics for quantifying this mixing. We adopt a generalized concept, node-level assortativity, defined at the node level, to better represent these diverse patterns and to more accurately quantify the learnability of GNNs. We find that the prediction performance of a wide range of GNN models is highly correlated with node-level assortativity. To break this limit, we focus on transforming the input graph into a computation graph that contains both proximity and structural information as distinct types of edges. The resulting multi-relational graph has an enhanced level of assortativity and, more importantly, preserves rich information from the original graph. We then propose to run GNNs on this computation graph and show that adaptively choosing between structure and proximity leads to improved performance under diverse mixing. Empirically, we demonstrate the benefits of adopting our transformation framework for semi-supervised node classification on a variety of real-world graph learning benchmarks.
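To make the idea of a node-level mixing statistic concrete, here is a minimal illustrative sketch (not the paper's method) that computes per-node label homophily, i.e. the fraction of a node's neighbors sharing its label. The paper's generalized node-level assortativity is a richer measure; the function name `local_homophily` and the toy graph below are assumptions for illustration only. The point is that two nodes in the same graph can have very different local mixing even when a single global coefficient looks moderate.

```python
# Hypothetical sketch: per-node label homophily as a simple local mixing
# statistic. Nodes embedded in assortative regions score near 1; nodes
# mixing mostly with other classes score near 0.
from collections import defaultdict


def local_homophily(edges, labels):
    """Return, for each node, the fraction of neighbors sharing its label."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    return {
        u: sum(labels[v] == labels[u] for v in nbrs) / len(nbrs)
        for u, nbrs in adj.items()
    }


# Toy graph: node 0 sits in an assortative cluster of class "a",
# while node 3 bridges into a region of class "b".
edges = [(0, 1), (0, 2), (1, 2), (2, 3), (3, 4), (3, 5)]
labels = {0: "a", 1: "a", 2: "a", 3: "a", 4: "b", 5: "b"}
h = local_homophily(edges, labels)
print(h[0])  # 1.0  — all of node 0's neighbors share its label
print(h[3])  # ~0.33 — node 3 mixes mostly with the other class
```

A global average over `h` would mask exactly this diversity, which is the motivation the abstract gives for measuring assortativity at the node level.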

Code Repositories

susheels/gnns-and-local-assortativity (Official, PyTorch)

Benchmarks

Benchmark | Methodology | Metric
node-classification-on-actor | WRGAT | Accuracy: 36.53 ± 0.77
node-classification-on-chameleon | WRGAT | Accuracy: 65.24 ± 0.87
node-classification-on-citeseer-48-32-20 | WRGAT | 1:1 Accuracy: 76.81 ± 1.89
node-classification-on-cora-48-32-20-fixed | WRGAT | 1:1 Accuracy: 88.20 ± 2.26
node-classification-on-cornell | WRGAT | Accuracy: 81.62 ± 3.90
node-classification-on-non-homophilic-10 | WRGAT | 1:1 Accuracy: 36.53 ± 0.77
node-classification-on-non-homophilic-11 | WRGAT | 1:1 Accuracy: 65.24 ± 0.87
node-classification-on-non-homophilic-12 | WRGAT | 1:1 Accuracy: 48.85 ± 0.78
node-classification-on-non-homophilic-13 | WRGAT | 1:1 Accuracy: 74.32 ± 0.53
node-classification-on-non-homophilic-7 | WRGAT | 1:1 Accuracy: 81.62 ± 3.90
node-classification-on-non-homophilic-8 | WRGAT | 1:1 Accuracy: 86.98 ± 3.78
node-classification-on-non-homophilic-9 | WRGAT | 1:1 Accuracy: 83.62 ± 5.50
node-classification-on-penn94 | WRGAT | Accuracy: 74.32 ± 0.53
node-classification-on-pubmed-48-32-20-fixed | WRGAT | 1:1 Accuracy: 88.52 ± 0.92
node-classification-on-squirrel | WRGAT | Accuracy: 48.85 ± 0.78
node-classification-on-texas | WRGAT | Accuracy: 83.62 ± 5.50