MGDCF: Distance Learning via Markov Graph Diffusion for Neural Collaborative Filtering
Jun Hu, Bryan Hooi, Shengsheng Qian, Quan Fang, Changsheng Xu

Abstract
Graph Neural Networks (GNNs) have recently been used to build Collaborative Filtering (CF) models that predict user preferences from historical user-item interactions. However, relatively little is understood about how GNN-based CF models relate to traditional Network Representation Learning (NRL) approaches. In this paper, we show the equivalence between several state-of-the-art GNN-based CF models and a traditional 1-layer NRL model based on context encoding. Based on a Markov process that trades off two types of distances, we present Markov Graph Diffusion Collaborative Filtering (MGDCF), which generalizes several state-of-the-art GNN-based CF models. Instead of treating the GNN as a trainable black box that propagates learnable user/item vertex embeddings, we treat it as an untrainable Markov process that constructs constant context features of vertices for a traditional NRL model, which encodes the context features with a fully-connected layer. This simplification helps us better understand how GNNs benefit CF models. In particular, it reveals that ranking losses play a crucial role in GNN-based CF tasks. With our proposed simple yet powerful ranking loss, InfoBPR, the NRL model can still perform well without the context features constructed by GNNs. We conduct experiments to analyze MGDCF in detail.
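The abstract describes two components: an untrainable Markov diffusion that mixes each vertex's initial features with propagated neighbor features, and a multi-negative BPR-style ranking loss. The sketch below is a hypothetical illustration of both ideas, not the paper's exact formulation: it assumes a diffusion step of the form X_k = (alpha * X_0 + beta * A_hat @ X_{k-1}) / (alpha + beta) over a symmetrically normalized adjacency matrix, and a softplus-over-score-differences loss as one plausible multi-negative generalization of BPR. The function names and parameters are illustrative.

```python
import numpy as np

def markov_graph_diffusion(A, X, alpha=0.1, beta=0.9, num_steps=4):
    """Untrainable Markov diffusion over a graph (hypothetical sketch).

    Each step trades off staying close to the initial features
    (weight alpha) against propagating neighbor features (weight beta):
        X_k = (alpha * X_0 + beta * A_hat @ X_{k-1}) / (alpha + beta)
    """
    # Symmetric normalization: A_hat = D^{-1/2} A D^{-1/2}
    deg = A.sum(axis=1)
    d_inv_sqrt = np.where(deg > 0, deg ** -0.5, 0.0)
    A_hat = A * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

    X0, Xk = X, X
    for _ in range(num_steps):
        Xk = (alpha * X0 + beta * (A_hat @ Xk)) / (alpha + beta)
    return Xk

def multi_negative_bpr_loss(pos_scores, neg_scores):
    """A multi-negative BPR-style ranking loss (illustrative form only):
    softplus of (negative score - positive score), averaged over
    several sampled negatives per positive interaction.

    pos_scores: (batch,) scores of observed user-item pairs.
    neg_scores: (batch, num_negatives) scores of sampled negatives.
    """
    diff = neg_scores - pos_scores[:, None]
    return float(np.mean(np.logaddexp(0.0, diff)))  # log(1 + exp(diff))
```

Because the diffusion is untrainable, the constant context features it produces can be precomputed once and fed to a single fully-connected encoding layer, matching the abstract's framing of GNN-based CF as a 1-layer NRL model over diffused context features.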
Benchmarks
| Benchmark | Methodology | Metrics |
|---|---|---|
| recommendation-systems-on-amazon-book | MGDCF | Recall@20: 0.0566, nDCG@20: 0.0460 |
| recommendation-systems-on-gowalla | MGDCF | Recall@20: 0.1864, nDCG@20: 0.1589 |
| recommendation-systems-on-yelp2018 | MGDCF | Recall@20: 0.0699, nDCG@20: 0.0575 |