Improving GAN Training with Probability Ratio Clipping and Sample Reweighting

Yue Wu Pan Zhou Andrew Gordon Wilson Eric P. Xing Zhiting Hu

Abstract

Despite success on a wide range of vision problems, generative adversarial networks (GANs) often suffer from inferior performance due to unstable training, especially for text generation. To address this issue, we propose a new variational GAN training framework that enjoys superior training stability. Our approach is inspired by a connection between GANs and reinforcement learning under a variational perspective. This connection leads to (1) probability ratio clipping, which regularizes generator training to prevent excessively large updates, and (2) a sample re-weighting mechanism that improves discriminator training by downplaying bad-quality fake samples. Moreover, our variational GAN framework can provably overcome the common training issue in many GANs that an optimal discriminator provides no informative gradient to the generator. By plugging this training approach into diverse state-of-the-art GAN architectures, we obtain significantly improved performance over a range of tasks, including text generation, text style transfer, and image generation.
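The two mechanisms described in the abstract can be illustrated with a minimal, framework-free sketch. The function names, the advantage signal, and the softmax temperature below are illustrative assumptions, not the paper's exact formulation: the first function mirrors a PPO-style clipped surrogate applied to the generator's probability ratio, and the second downweights fake samples with low discriminator scores.

```python
import math

def clipped_surrogate(logp_new, logp_old, advantage, eps=0.2):
    """PPO-style clipped objective for one generated sample (illustrative).

    logp_new / logp_old: log-probability of the sample under the current
    and previous generator; advantage: reward-like signal (e.g. derived
    from the discriminator); eps: clipping range.
    """
    # Probability ratio between updated and previous generator policies
    ratio = math.exp(logp_new - logp_old)
    # Clip the ratio to [1 - eps, 1 + eps] and take the pessimistic bound,
    # which prevents excessively large generator updates
    clipped = max(min(ratio, 1.0 + eps), 1.0 - eps)
    return min(ratio * advantage, clipped * advantage)

def fake_sample_weights(disc_scores, temperature=1.0):
    """Softmax re-weighting of fake samples (illustrative).

    Fake samples with low discriminator scores (i.e. obviously bad fakes)
    receive small weights, so they contribute less to discriminator training.
    """
    exps = [math.exp(s / temperature) for s in disc_scores]
    z = sum(exps)
    return [e / z for e in exps]
```

With no policy change (`logp_new == logp_old`) the ratio is 1 and the surrogate reduces to the raw advantage; once the ratio drifts beyond `1 + eps`, the clipped term caps the objective and the gradient incentive to move further vanishes.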

Code Repositories

Holmeswww/PPOGAN (official implementation, PyTorch)

Benchmarks

Benchmark: image-generation-on-cifar-10
Methodology: PPOGAN
Metrics: FID: 10.7

Benchmark: text-generation-on-emnlp2017-wmt
Methodology: PPOGAN
Metrics: BLEU-2: 0.905, BLEU-3: 0.692, BLEU-4: 0.47, BLEU-5: 0.322, NLLgen: 2.265
