StarGAN v2: Diverse Image Synthesis for Multiple Domains
Yunjey Choi, Youngjung Uh, Jaejun Yoo, Jung-Woo Ha

Abstract
A good image-to-image translation model should learn a mapping between different visual domains while satisfying the following properties: 1) diversity of generated images and 2) scalability over multiple domains. Existing methods address only one of these issues: they have limited diversity or require multiple models to cover all domains. We propose StarGAN v2, a single framework that tackles both and shows significantly improved results over the baselines. Experiments on CelebA-HQ and a new animal faces dataset (AFHQ) validate our superiority in terms of visual quality, diversity, and scalability. To better assess image-to-image translation models, we release AFHQ, high-quality animal faces with large inter- and intra-domain differences. The code, pretrained models, and dataset can be found at https://github.com/clovaai/stargan-v2.
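The abstract describes a single generator that covers all domains, with diversity obtained by sampling many style codes per target domain. The sketch below illustrates that interface only; the module names, layer sizes, and the scaling stand-in for style modulation are illustrative assumptions, not the authors' architecture (see the linked repository for the real one).

```python
# Minimal sketch of the interface described in the abstract: one generator
# serves all domains, and diversity comes from sampling style codes.
# All names and sizes here are hypothetical, not StarGAN v2's actual code.
import torch
import torch.nn as nn

NUM_DOMAINS, LATENT_DIM, STYLE_DIM = 3, 16, 64

class MappingNetwork(nn.Module):
    """Maps a random latent z to a style code, one output head per domain."""
    def __init__(self):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(LATENT_DIM, 512), nn.ReLU())
        self.heads = nn.ModuleList(
            [nn.Linear(512, STYLE_DIM) for _ in range(NUM_DOMAINS)])

    def forward(self, z, y):
        h = self.shared(z)
        out = torch.stack([head(h) for head in self.heads], dim=1)
        # Select the head of the target domain y for each sample.
        return out[torch.arange(z.size(0)), y]

class Generator(nn.Module):
    """Placeholder generator: translates x conditioned on style code s."""
    def __init__(self):
        super().__init__()
        # Crude stand-in for style-based feature modulation.
        self.to_scale = nn.Linear(STYLE_DIM, 3)

    def forward(self, x, s):
        return x * self.to_scale(s).view(-1, 3, 1, 1)

mapping, generator = MappingNetwork(), Generator()
x = torch.randn(1, 3, 256, 256)   # one source image
y = torch.tensor([2])             # target domain index
for _ in range(3):                # diverse outputs from a single model
    z = torch.randn(1, LATENT_DIM)
    fake = generator(x, mapping(z, y))
    print(fake.shape)             # torch.Size([1, 3, 256, 256])
```

Resampling z while keeping the same source image and target domain is what produces multiple distinct translations from one model, which is the combination of diversity and scalability the abstract claims.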
Code Repositories
https://github.com/clovaai/stargan-v2
Benchmarks
| Benchmark | Model | FID ↓ | LPIPS ↑ | KID ↓ |
|---|---|---|---|---|
| fundus-to-angiography-generation-on-fundus | StarGAN v2 | 27.7 | – | 0.00118 |
| image-to-image-translation-on-afhq | StarGAN v2 | 24.4 | 0.524 | – |
| image-to-image-translation-on-celeba-hq | StarGAN v2 | 13.73 | 0.428 | – |
| multimodal-unsupervised-image-to-image-4 | StarGAN v2 | 13.73 | – | – |
| multimodal-unsupervised-image-to-image-5 | StarGAN v2 | 16.2 | – | – |

(↓ lower is better, ↑ higher is better; LPIPS here measures the diversity of generated outputs.)
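FID and KID compare the distribution of generated images against real ones, while LPIPS measures perceptual distance between outputs and serves as a diversity score. Below is a hedged sketch of how such numbers are typically computed with off-the-shelf tools (torchmetrics for FID/KID, the lpips package for LPIPS); the random tensors are placeholders for real image batches, and exact evaluation protocols vary per benchmark.

```python
# Sketch of metric computation with standard libraries; the data is random
# and only stands in for real and generated image batches.
import torch
import lpips
from torchmetrics.image.fid import FrechetInceptionDistance
from torchmetrics.image.kid import KernelInceptionDistance

real = torch.randint(0, 256, (8, 3, 256, 256), dtype=torch.uint8)
fake = torch.randint(0, 256, (8, 3, 256, 256), dtype=torch.uint8)

fid = FrechetInceptionDistance(feature=2048)   # lower is better
fid.update(real, real=True)
fid.update(fake, real=False)
print("FID:", fid.compute().item())

kid = KernelInceptionDistance(subset_size=4)   # lower is better
kid.update(real, real=True)
kid.update(fake, real=False)
kid_mean, kid_std = kid.compute()
print("KID:", kid_mean.item())

# LPIPS: perceptual distance between two generated outputs; a higher
# average distance over many pairs indicates more diverse translations.
loss_fn = lpips.LPIPS(net='alex')
a = torch.rand(1, 3, 256, 256) * 2 - 1         # lpips expects [-1, 1] inputs
b = torch.rand(1, 3, 256, 256) * 2 - 1
print("LPIPS:", loss_fn(a, b).item())
```

In practice, benchmark scores are computed over thousands of images rather than the tiny batches used here, since FID and KID estimates are unreliable at small sample sizes.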