Exposing flaws of generative model evaluation metrics and their unfair treatment of diffusion models

George Stein, Jesse C. Cresswell, Rasa Hosseinzadeh, Yi Sui, Brendan Leigh Ross, Valentin Villecroze, Zhaoyan Liu, Anthony L. Caterini, J. Eric T. Taylor, Gabriel Loaiza-Ganem

Abstract

We systematically study a wide variety of generative models spanning semantically-diverse image datasets to understand and improve the feature extractors and metrics used to evaluate them. Using best practices in psychophysics, we measure human perception of image realism for generated samples by conducting the largest experiment evaluating generative models to date, and find that no existing metric strongly correlates with human evaluations. Comparing to 17 modern metrics for evaluating the overall performance, fidelity, diversity, rarity, and memorization of generative models, we find that the state-of-the-art perceptual realism of diffusion models as judged by humans is not reflected in commonly reported metrics such as FID. This discrepancy is not explained by diversity in generated samples, though one cause is over-reliance on Inception-V3. We address these flaws through a study of alternative self-supervised feature extractors, find that the semantic information encoded by individual networks strongly depends on their training procedure, and show that DINOv2-ViT-L/14 allows for much richer evaluation of generative models. Next, we investigate data memorization, and find that generative models do memorize training examples on simple, smaller datasets like CIFAR10, but not necessarily on more complex datasets like ImageNet. However, our experiments show that current metrics do not properly detect memorization: none in the literature is able to separate memorization from other phenomena such as underfitting or mode shrinkage. To facilitate further development of generative models and their evaluation we release all generated image datasets, human evaluation data, and a modular library to compute 17 common metrics for 9 different encoders at https://github.com/layer6ai-labs/dgm-eval.
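
For reference, the FD values reported below are Fréchet distances computed in DINOv2-ViT-L/14 feature space, while FID applies the same formula to Inception-V3 features. A minimal NumPy/SciPy sketch of that shared computation (the standard Fréchet distance between Gaussians fit to two feature sets, not the repository's exact implementation):

```python
import numpy as np
from scipy import linalg

def frechet_distance(feats_real, feats_gen):
    """Fréchet distance between Gaussians fit to two (N, D) feature arrays.

    With Inception-V3 features this is FID; with DINOv2-ViT-L/14 features
    it is the FD variant the paper advocates.
    """
    mu1, mu2 = feats_real.mean(axis=0), feats_gen.mean(axis=0)
    sigma1 = np.cov(feats_real, rowvar=False)
    sigma2 = np.cov(feats_gen, rowvar=False)
    covmean, _ = linalg.sqrtm(sigma1 @ sigma2, disp=False)  # matrix square root
    if np.iscomplexobj(covmean):
        covmean = covmean.real  # drop tiny imaginary parts from numerics
    diff = mu1 - mu2
    return float(diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean))
```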

Code Repositories

- gmum/PALATE (JAX, mentioned in GitHub)
- louaaron/scaling-riemannian-diffusion (PyTorch, mentioned in GitHub)
- layer6ai-labs/dgm-eval (PyTorch, official implementation)
- layer6ai-labs/dgm_manifold_survey (PyTorch, mentioned in GitHub)
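
The official layer6ai-labs/dgm-eval repository bundles feature extraction and all 17 metrics behind one interface. As a hedged sketch of only the DINOv2-ViT-L/14 feature-extraction step, the following uses the public torch.hub entrypoint rather than dgm-eval's internal wrappers; the preprocessing choices are illustrative assumptions, not the repository's exact pipeline:

```python
import torch
from PIL import Image
from torchvision import transforms

# Public torch.hub entrypoint for DINOv2 ViT-L/14 (1024-d CLS features).
model = torch.hub.load("facebookresearch/dinov2", "dinov2_vitl14").eval()

# Assumed preprocessing: 224 x 224 crops (divisible by the 14-px patch size),
# normalized with ImageNet statistics.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=(0.485, 0.456, 0.406),
                         std=(0.229, 0.224, 0.225)),
])

@torch.no_grad()
def extract_features(image_paths):
    batch = torch.stack([preprocess(Image.open(p).convert("RGB"))
                         for p in image_paths])
    return model(batch)  # shape (N, 1024)
```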

Benchmarks

Benchmark: Image Generation on FFHQ 256×256 (methodology: "Exposing", i.e. this paper's evaluation)

| Method          | Coverage | Density | FD     | FID   | Precision | Recall |
|-----------------|----------|---------|--------|-------|-----------|--------|
| StyleGAN2-ada   | 0.39     | 0.36    | 514.78 | 5.30  | 0.59      | 0.06   |
| LDM             | 0.74     | 0.83    | 226.72 | 8.11  | 0.81      | 0.44   |
| InsGen          | 0.51     | n/a     | 436.26 | 3.46  | 0.64      | 0.13   |
| Unleash-Trans   | 0.53     | 0.61    | 393.45 | 9.02  | 0.76      | 0.24   |
| Efficient-vdVAE | 0.54     | 1.04    | 514.16 | 34.88 | 0.86      | 0.14   |
| StyleNAT        | 0.71     | 0.77    | 229.42 | 2.11  | 0.79      | 0.41   |
| StyleSwin       | 0.64     | 0.71    | 303.21 | 2.89  | 0.79      | 0.28   |
| StyleGAN-XL     | 0.63     | 0.68    | 240.07 | 2.26  | 0.77      | 0.43   |
| Projected-GAN   | 0.30     | 0.31    | 589.20 | 4.29  | 0.57      | 0.07   |
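
Coverage, Density, Precision, and Recall above are the manifold-based estimates of Naeem et al. (2020), computed from k-nearest-neighbour radii in the encoder's feature space. A minimal NumPy/SciPy sketch under the common choice k = 5 (an assumption; the official library's implementation may differ in details):

```python
import numpy as np
from scipy.spatial.distance import cdist

def knn_radius(feats, k):
    # Distance from each point to its k-th nearest neighbour within the same
    # set (sorted index k skips the zero self-distance at index 0).
    return np.sort(cdist(feats, feats), axis=1)[:, k]

def prdc(real, fake, k=5):
    """Precision, Recall, Density, Coverage (Naeem et al., 2020)."""
    r_real = knn_radius(real, k)   # (N_real,)
    r_fake = knn_radius(fake, k)   # (N_fake,)
    d = cdist(real, fake)          # (N_real, N_fake)

    in_real_ball = d < r_real[:, None]
    precision = in_real_ball.any(axis=0).mean()        # fakes on real manifold
    recall = (d < r_fake[None, :]).any(axis=1).mean()  # reals on fake manifold
    density = in_real_ball.sum(axis=0).mean() / k      # avg real balls per fake
    coverage = (d.min(axis=1) < r_real).mean()         # reals with a close fake
    return precision, recall, density, coverage
```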
