Abstract of the paper "Improved Training of Wasserstein GANs":
Generative Adversarial Networks (GANs) are powerful generative models, but suffer from training instability. The recently proposed Wasserstein GAN (WGAN) makes significant progress toward stable training of GANs, but can still generate low-quality samples or fail to converge in some settings. We find that these training failures are often due to the use of weight clipping in WGAN to enforce a Lipschitz constraint on the critic, which can lead to pathological behavior. We propose an alternative method for enforcing the Lipschitz constraint: instead of clipping weights, penalize the norm of the gradient of the critic with respect to its input. Our proposed method converges faster and generates higher-quality samples than WGAN with weight clipping. Finally, our method enables very stable GAN training: for the first time, we can train a wide variety of GAN architectures with almost no hyperparameter tuning, including 101-layer ResNets and language models over discrete data.
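To make the key idea concrete, below is a minimal PyTorch sketch of the gradient penalty described in the abstract: instead of clipping weights, the norm of the critic's gradient with respect to its input is penalized toward 1. The function name, the image-shaped tensors `real` and `fake`, and the coefficient `lambda_gp = 10` are illustrative assumptions, not the authors' code (the official implementation linked below uses TensorFlow).

```python
import torch

def gradient_penalty(critic, real, fake, lambda_gp=10.0):
    # Sample points on straight lines between real and fake samples
    # (assumes 4-D image batches of shape [B, C, H, W]).
    batch_size = real.size(0)
    eps = torch.rand(batch_size, 1, 1, 1, device=real.device).expand_as(real)
    interp = (eps * real + (1 - eps) * fake).requires_grad_(True)

    scores = critic(interp)
    grads = torch.autograd.grad(
        outputs=scores,
        inputs=interp,
        grad_outputs=torch.ones_like(scores),
        create_graph=True,  # keep the graph so the penalty itself is differentiable
    )[0]

    # Penalize deviation of the input-gradient norm from 1
    # (a soft version of the Lipschitz constraint).
    grad_norm = grads.view(batch_size, -1).norm(2, dim=1)
    return lambda_gp * ((grad_norm - 1) ** 2).mean()
```

In a training loop this penalty would simply be added to the critic's Wasserstein loss before the backward pass.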
Paper link:
https://arxiv.org/abs/1704.00028
Code link:
https://github.com/igul222/improved_wgan_training
Original post link:
http://weibo.com/1402400261/EDdojryZI?from=page_1005051402400261_profile&wvr=6&mod=weibotime&type=comment