Tim R. Davidson; Luca Falorsi; Nicola De Cao; Thomas Kipf; Jakub M. Tomczak

Abstract
The Variational Auto-Encoder (VAE) is one of the most widely used unsupervised machine learning models. Although the default choice of a Gaussian distribution for both the prior and posterior is mathematically convenient and often leads to competitive results, we show that this parameterization fails to model data with a latent hyperspherical structure. To address this issue we propose using a von Mises-Fisher (vMF) distribution instead, leading to a hyperspherical latent space. Through a series of experiments we show how such a hyperspherical VAE, or $\mathcal{S}$-VAE, is better suited to capturing data with a hyperspherical latent structure, while outperforming a normal VAE, or $\mathcal{N}$-VAE, in low dimensions on other data types. Code at http://github.com/nicola-decao/s-vae-tf and https://github.com/nicola-decao/s-vae-pytorch
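To make the idea of a hyperspherical latent space concrete, the sketch below draws latent samples from a vMF distribution on the unit circle $\mathcal{S}^1$, where the vMF reduces to the von Mises distribution over angles (available in NumPy). This is an illustrative toy example, not code from the paper's repositories; the function name and parameters are our own. In higher dimensions the paper uses a rejection-sampling scheme, which this sketch does not implement.

```python
import numpy as np

def sample_vmf_circle(mu_angle, kappa, n, seed=None):
    """Sample n points on the unit circle S^1 from a von Mises-Fisher
    distribution, which on S^1 coincides with the von Mises distribution.

    mu_angle: mean direction as an angle in radians.
    kappa: concentration parameter (kappa -> 0 recovers the uniform
           distribution on the circle, which gives a uniform prior).
    """
    rng = np.random.default_rng(seed)
    # NumPy samples the von Mises distribution over angles directly.
    angles = rng.vonmises(mu_angle, kappa, size=n)
    # Map each angle to a point on S^1, embedded in R^2.
    return np.stack([np.cos(angles), np.sin(angles)], axis=1)

samples = sample_vmf_circle(mu_angle=0.0, kappa=5.0, n=1000)
# Every latent sample lies exactly on the unit circle: ||z|| = 1.
print(np.allclose(np.linalg.norm(samples, axis=1), 1.0))
```

Because every sample has unit norm by construction, the latent space is the sphere itself rather than all of $\mathbb{R}^2$, which is the structural property the $\mathcal{S}$-VAE exploits.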
Benchmarks
| Benchmark | Methodology | AP | AUC |
|---|---|---|---|
| link-prediction-on-citeseer | S-VGAE | 95.2% | 94.7% |
| link-prediction-on-cora | S-VGAE | 94.1% | 94.1% |
| link-prediction-on-pubmed | S-VGAE | 96.0% | 96.0% |