Image Generation on CelebA-HQ 256×256

Evaluation Metric

FID
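FID (Fréchet Inception Distance) fits a Gaussian to Inception-v3 features of real and generated images and measures the Fréchet distance between the two Gaussians; lower is better. A minimal sketch of the closed-form distance, assuming the feature means and covariances have already been extracted (the `fid` helper name is illustrative, not from the benchmark's evaluation code):

```python
import numpy as np
from scipy.linalg import sqrtm

def fid(mu1, sigma1, mu2, sigma2):
    """Fréchet distance between N(mu1, sigma1) and N(mu2, sigma2):
    ||mu1 - mu2||^2 + Tr(sigma1 + sigma2 - 2 * (sigma1 @ sigma2)^(1/2))."""
    diff = mu1 - mu2
    covmean = sqrtm(sigma1 @ sigma2)
    if np.iscomplexobj(covmean):
        # sqrtm can return tiny imaginary components from numerical noise
        covmean = covmean.real
    return float(diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean))
```

In practice the statistics are computed over tens of thousands of samples, since FID is biased at small sample sizes.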

Evaluation Results

Performance of each model on this benchmark.

| Model | FID | Paper Title | Repository |
| --- | --- | --- | --- |
| GLOW | 68.93 | Glow: Generative Flow with Invertible 1x1 Convolutions | |
| VAEBM | 20.38 | VAEBM: A Symbiosis between Variational Autoencoders and Energy-based Models | |
| Dual-MCMC EBM | 15.89 | Learning Energy-based Model via Dual-MCMC Teaching | - |
| DC-VAE | 15.81 | Dual Contradistinctive Generative Autoencoder | - |
| VQGAN+Transformer | 10.2 | Taming Transformers for High-Resolution Image Synthesis | |
| Joint-EBM | 9.89 | Learning Joint Latent Space EBM Prior Model for Multi-layer Generator | - |
| Diffusion-JEBM | 8.78 | Learning Latent Space Hierarchical EBM Diffusion Models | - |
| DDMI | 8.73 | DDMI: Domain-Agnostic Latent Diffusion Models for Synthesizing High-Quality Implicit Neural Representations | |
| DDGAN | 7.64 | Tackling the Generative Learning Trilemma with Denoising Diffusion GANs | |
| LSGM | 7.22 | Score-based Generative Modeling in Latent Space | |
| UNCSN++ (RVE) + ST | 7.16 | Soft Truncation: A Universal Training Technique of Score-based Diffusion Model for High Precision Score Estimation | |
| WaveDiff | 5.94 | Wavelet Diffusion Models are fast and scalable Image Generators | |
| RDUOT | 5.6 | A High-Quality Robust Diffusion Framework for Corrupted Dataset | |
| LFM | 5.26 | Flow Matching in Latent Space | |
| LDM-4 | 5.11 | High-Resolution Image Synthesis with Latent Diffusion Models | |
| StyleSwin | 3.25 | StyleSwin: Transformer-based GAN for High-resolution Image Generation | |
| RDM | 3.15 | Relay Diffusion: Unifying diffusion process across resolutions for image synthesis | |
| BOSS | - | Bellman Optimal Stepsize Straightening of Flow-Matching Models | |
| RNODE | - | How to train your neural ODE: the world of Jacobian and kinetic regularization | |