Towards Faster and Stabilized GAN Training for High-fidelity Few-shot Image Synthesis

Bingchen Liu, Yizhe Zhu, Kunpeng Song, Ahmed Elgammal

Abstract

Training Generative Adversarial Networks (GANs) on high-fidelity images usually requires large-scale GPU clusters and a vast number of training images. In this paper, we study the few-shot image synthesis task for GANs with minimal computing cost. We propose a lightweight GAN structure that achieves superior quality at 1024×1024 resolution. Notably, the model converges from scratch after just a few hours of training on a single RTX 2080 GPU, and its performance remains consistent even with fewer than 100 training samples. Two technical designs constitute our work: a skip-layer channel-wise excitation module and a self-supervised discriminator trained as a feature encoder. On thirteen datasets covering a wide variety of image domains (the datasets and code are available at: https://github.com/odegeasslbc/FastGAN-pytorch), we show our model's superior performance compared to the state-of-the-art StyleGAN2 when data and computing budgets are limited.
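The skip-layer channel-wise excitation module mentioned above rescales the channels of a high-resolution feature map using gates computed from a much lower-resolution feature map, giving the generator a cheap long-range skip connection. The snippet below is a minimal PyTorch sketch under our own assumptions: the class name SkipLayerExcitation, the 4×4 pooling size, and the specific layer choices are illustrative and not taken from the authors' released FastGAN code.

```python
# Minimal sketch (assumption, not the authors' exact implementation) of a
# skip-layer channel-wise excitation (SLE) module: a low-resolution feature
# map is squeezed into per-channel gates that rescale the channels of a
# higher-resolution feature map.
import torch
import torch.nn as nn

class SkipLayerExcitation(nn.Module):
    def __init__(self, low_channels: int, high_channels: int):
        super().__init__()
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(4),                    # squeeze the low-res map to 4x4
            nn.Conv2d(low_channels, high_channels, 4),  # 4x4 conv -> 1x1 spatial output
            nn.LeakyReLU(0.1),
            nn.Conv2d(high_channels, high_channels, 1),
            nn.Sigmoid(),                               # per-channel gates in (0, 1)
        )

    def forward(self, x_high: torch.Tensor, x_low: torch.Tensor) -> torch.Tensor:
        # Channel-wise excitation: broadcast the 1x1 gates over the high-res map.
        return x_high * self.gate(x_low)

# Example: gate a 128x128 feature map with gates derived from an 8x8 feature map.
x_low = torch.randn(1, 256, 8, 8)
x_high = torch.randn(1, 64, 128, 128)
out = SkipLayerExcitation(256, 64)(x_high, x_low)
print(out.shape)  # torch.Size([1, 64, 128, 128])
```

Because the gates collapse to a 1×1 spatial map, the module adds only a small constant cost per resolution while letting gradients flow directly from high-resolution layers back to low-resolution ones.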

Benchmarks

Benchmark                                Methodology    Metric
image-generation-on-ade-indoor           FastGAN        FID: 30.33
image-generation-on-pokemon-1024x1024    FastGAN        FID: 56.46
image-generation-on-pokemon-256x256      FastGAN        FID: 81.86
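The reported metric is the Fréchet Inception Distance (FID), which compares Gaussian statistics of Inception-network features extracted from real and generated images; lower is better. With (μ_r, Σ_r) and (μ_g, Σ_g) denoting the feature means and covariances of the real and generated sets, the standard definition is:

```latex
\mathrm{FID} = \lVert \mu_r - \mu_g \rVert_2^2
             + \operatorname{Tr}\!\left( \Sigma_r + \Sigma_g - 2\,(\Sigma_r \Sigma_g)^{1/2} \right)
```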
