Glow: Generative Flow with Invertible 1×1 Convolutions
Diederik P. Kingma; Prafulla Dhariwal

Abstract
Flow-based generative models (Dinh et al., 2014) are conceptually attractive due to tractability of the exact log-likelihood, tractability of exact latent-variable inference, and parallelizability of both training and synthesis. In this paper we propose Glow, a simple type of generative flow using an invertible 1×1 convolution. Using our method we demonstrate a significant improvement in log-likelihood on standard benchmarks. Perhaps most strikingly, we demonstrate that a generative model optimized towards the plain log-likelihood objective is capable of efficient realistic-looking synthesis and manipulation of large images. The code for our model is available at https://github.com/openai/glow.
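The invertible 1×1 convolution mixes channels at each spatial position with a learned matrix W, and its contribution to the change-of-variables log-likelihood is h·w·log|det W|. The sketch below (a simplified numpy illustration, not the paper's implementation; function names are hypothetical) shows the forward pass, its log-determinant, and the exact inverse:

```python
import numpy as np

def invertible_1x1_conv(x, W):
    """Forward pass: mix channels at every spatial position with W.

    x: array of shape (H, W, C); W: invertible (C, C) matrix.
    Returns the transformed tensor and the log-determinant term
    h * w * log|det W| added to the log-likelihood objective.
    """
    h, w, c = x.shape
    z = (x.reshape(-1, c) @ W.T).reshape(h, w, c)
    logdet = h * w * np.log(np.abs(np.linalg.det(W)))
    return z, logdet

def inverse_1x1_conv(z, W):
    """Exact inverse: multiply each channel vector by W^{-1}."""
    h, w, c = z.shape
    return (z.reshape(-1, c) @ np.linalg.inv(W).T).reshape(h, w, c)

rng = np.random.default_rng(0)
# Initialize W as a random rotation (orthogonal, |det| = 1),
# as suggested in the paper, so the initial logdet is zero.
W = np.linalg.qr(rng.normal(size=(4, 4)))[0]
x = rng.normal(size=(8, 8, 4))
z, logdet = invertible_1x1_conv(x, W)
x_rec = inverse_1x1_conv(z, W)
```

Because the determinant of a C×C matrix costs only O(C³) independent of image size, this layer stays cheap while being a strict generalization of the fixed channel permutations used in earlier flows.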
Benchmarks
| Benchmark | Method | Metrics |
|---|---|---|
| Density Estimation on ImageNet 32×32 | Glow | NLL: 4.09 bits/dim |
| Image Generation on CelebA 256×256 | Glow (Kingma and Dhariwal, 2018) | 1.03 bits/dim |
| Image Generation on CelebA-HQ 256×256 | Glow | FID: 68.93 |
| Image Generation on ImageNet 32×32 | Glow (Kingma and Dhariwal, 2018) | 4.09 bits/dim |
| Image Generation on ImageNet 64×64 | Glow (Kingma and Dhariwal, 2018) | 3.81 bits/dim |
| Unsupervised Anomaly Detection on SMAP | Glow | AUC: 91.55, F1: 86.05, Precision: 87.40, Recall: 84.93 |
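The density-estimation results above are reported in bits per dimension, i.e. the model's total negative log-likelihood in nats divided by the number of dimensions and by ln 2. A minimal helper showing this standard conversion (the function name and example numbers are illustrative, not from the paper):

```python
import numpy as np

def bits_per_dim(total_nll_nats, num_dims):
    """Convert a total negative log-likelihood in nats to bits per dimension."""
    return total_nll_nats / (num_dims * np.log(2.0))

# A 32x32 RGB image has 32 * 32 * 3 = 3072 dimensions, so a model
# scoring 4.09 bits/dim assigns about 3072 * 4.09 * ln(2) nats of NLL.
dims = 32 * 32 * 3
example_nll_nats = dims * 4.09 * np.log(2.0)
```

Lower is better: each 1.0 bits/dim improvement halves the per-pixel perplexity of the model.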