Enhancing GAN Performance through Neural Architecture Search and Tensor Decomposition
Prasanna Reddy Pulakurthi, Mahsa Mozaffari, Sohail A. Dianat, Majid Rabbani, Jamison Heard, Raghuveer Rao
Abstract
Generative Adversarial Networks (GANs) have emerged as a powerful tool for generating high-fidelity content. This paper presents a new training procedure that leverages Neural Architecture Search (NAS) to discover the optimal architecture for image generation while employing the Maximum Mean Discrepancy (MMD) repulsive loss for adversarial training. Moreover, the generator network is compressed using tensor decomposition to reduce its computational footprint and inference time while preserving its generative performance. Experimental results show improvements of 34% and 28% in the FID score on the CIFAR-10 and STL-10 datasets, respectively, with corresponding footprint reductions of 14× and 31× compared to the method with the best FID score reported in the literature. The implementation code is available at: https://github.com/PrasannaPulakurthi/MMD-AdversarialNAS.
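The MMD repulsive loss mentioned above (Wang et al., 2019) can be sketched as follows, assuming a Gaussian kernel computed over discriminator features. This is a minimal illustration only; the function names, the single fixed kernel bandwidth, and the biased MMD estimate are assumptions and are not taken from the authors' repository.

```python
# Minimal sketch of the MMD repulsive loss with a Gaussian kernel.
# f_real / f_fake are batches of discriminator features, shape (N, D).
import torch

def gaussian_kernel(a, b, sigma=1.0):
    # Pairwise Gaussian kernel k(a_i, b_j) = exp(-||a_i - b_j||^2 / (2 sigma^2)).
    dist2 = torch.cdist(a, b, p=2.0) ** 2
    return torch.exp(-dist2 / (2.0 * sigma ** 2))

def mmd2(f_real, f_fake, sigma=1.0):
    # Biased estimate of squared MMD between real and generated feature batches.
    k_rr = gaussian_kernel(f_real, f_real, sigma).mean()
    k_ff = gaussian_kernel(f_fake, f_fake, sigma).mean()
    k_rf = gaussian_kernel(f_real, f_fake, sigma).mean()
    return k_rr + k_ff - 2.0 * k_rf

def generator_loss(f_real, f_fake, sigma=1.0):
    # The generator minimizes MMD^2 between real and generated features.
    return mmd2(f_real, f_fake, sigma)

def discriminator_repulsive_loss(f_real, f_fake, sigma=1.0):
    # Repulsive discriminator loss: E[k(real, real)] - E[k(fake, fake)],
    # which spreads real-sample features apart while contracting
    # generated-sample features.
    k_rr = gaussian_kernel(f_real, f_real, sigma).mean()
    k_ff = gaussian_kernel(f_fake, f_fake, sigma).mean()
    return k_rr - k_ff
```

In practice, MMD-based GANs typically mix several kernel bandwidths rather than a single sigma; the single-bandwidth form above is kept only to make the loss structure easy to read.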
Benchmarks
| Benchmark | Methodology | FID | Inception Score | Model Size (MB) |
|---|---|---|---|---|
| image-generation-on-stl-10 | MMD-AdversarialNAS | 12.91 | 11.6 | 19.47 |
| image-generation-on-stl-10 | MMD-AdversarialNAS (Compressed Small) | 14.84 | 11.66 | 1.71 |
| image-generation-on-stl-10 | MMD-AdversarialNAS (Compressed Large) | 13.06 | 11.28 | 5.35 |