Adversarial Score identity Distillation: Rapidly Surpassing the Teacher in One Step

Mingyuan Zhou Huangjie Zheng Yi Gu Zhendong Wang Hai Huang

Abstract

Score identity Distillation (SiD) is a data-free method that has achieved SOTA performance in image generation by leveraging only a pretrained diffusion model, without requiring any training data. However, its ultimate performance is constrained by how accurately the pretrained model captures the true data scores at different stages of the diffusion process. In this paper, we introduce SiDA (SiD with Adversarial Loss), which not only enhances generation quality but also improves distillation efficiency by incorporating real images and an adversarial loss. SiDA uses the encoder of the generator's score network as a discriminator, allowing it to distinguish between real images and those generated by SiD. The adversarial loss is batch-normalized within each GPU and then combined with the original SiD loss. This integration effectively incorporates the average "fakeness" per GPU batch into the pixel-based SiD loss, enabling SiDA to distill a single-step generator. SiDA converges significantly faster than its predecessor when distilled from scratch, and swiftly improves upon the original model's performance during fine-tuning from a pre-distilled SiD generator. This one-step adversarial distillation method establishes new benchmarks in generation performance when distilling EDM diffusion models, achieving an FID score of 1.110 on ImageNet 64x64. When distilling EDM2 models trained on ImageNet 512x512, our SiDA method surpasses even the largest teacher model, EDM2-XXL, which achieved an FID of 1.81 using classifier-free guidance (CFG) and 63 generation steps. In contrast, SiDA achieves FID scores of 2.156 for size XS, 1.669 for S, 1.488 for M, 1.413 for L, 1.379 for XL, and 1.366 for XXL, all without CFG and in a single generation step. These results represent substantial improvements across all model sizes. Our code is available at https://github.com/mingyuanzhou/SiD/tree/sida.
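The abstract's description of how the adversarial term is folded into the SiD objective can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the function name, the use of a non-saturating softplus generator loss, and the exact form of the per-batch normalization are all assumptions made for the sketch.

```python
import torch
import torch.nn.functional as F

def sida_generator_loss(sid_loss: torch.Tensor,
                        disc_logits_fake: torch.Tensor,
                        adv_weight: float = 1.0,
                        eps: float = 1e-8) -> torch.Tensor:
    """Blend a per-sample SiD loss with a batch-normalized adversarial loss.

    sid_loss:         per-sample pixel-based SiD loss, shape (B,).
    disc_logits_fake: discriminator logits on generated images, shape (B,),
                      where the discriminator is the encoder of the
                      generator's score network.
    """
    # Non-saturating GAN generator loss: softplus(-D(fake)).
    adv = F.softplus(-disc_logits_fake)
    # Normalize by the batch mean (computed within one GPU, gradient
    # detached), so the adversarial term contributes the average
    # "fakeness" of the batch at a scale comparable to the SiD loss.
    adv_norm = adv / (adv.mean().detach() + eps)
    return sid_loss.mean() + adv_weight * adv_norm.mean()
```

Normalizing within each GPU's batch, as the abstract states, avoids cross-device communication when rescaling the adversarial term in distributed training.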

Code Repositories

mingyuanzhou/sid (Official, PyTorch)
mingyuanzhou/sid-lsg (PyTorch, mentioned in GitHub)
Benchmarks

Benchmark                             Methodology          FID     NFE
image-generation-on-afhq-v2-64x64     SiDA-EDM             1.276   1
image-generation-on-cifar-10          SiDA-EDM             1.396   1
image-generation-on-ffhq-64x64        SiDA-EDM             1.040   1
image-generation-on-imagenet-512x512  SiD-EDM2-M (498M)    2.06    1
image-generation-on-imagenet-512x512  SiDA-EDM2-M (498M)   1.488   1
image-generation-on-imagenet-512x512  SiDA-EDM2-L (777M)   1.413   1
image-generation-on-imagenet-512x512  SiD-EDM2-XS (125M)   3.353   1
image-generation-on-imagenet-512x512  SiDA-EDM2-XL (1.1B)  1.379   1
image-generation-on-imagenet-512x512  SiD-EDM2-S (280M)    2.707   1
image-generation-on-imagenet-512x512  SiDA-EDM2-XS (125M)  2.156   1
image-generation-on-imagenet-512x512  SiD-EDM2-XXL (1.5B)  1.969   1
image-generation-on-imagenet-512x512  SiDA-EDM2-XXL (1.5B) 1.366   1
image-generation-on-imagenet-512x512  SiDA-EDM2-S (280M)   1.669   1
image-generation-on-imagenet-512x512  SiD-EDM2-L (777M)    1.907   1
image-generation-on-imagenet-512x512  SiD-EDM2-XL (1.1B)   1.888   1
image-generation-on-imagenet-64x64    SiDA-EDM             1.11    1
