Column: 机器学习研究会 (Machine Learning Research Society)
The Machine Learning Research Society is a student organization under Peking University's Innovation Center for Big Data and Machine Learning, aiming to build a platform where machine learning practitioners can exchange ideas. Besides sharing timely news from the field, the society hosts lectures by industry giants and leading academics, salon-style sharing sessions, "real data" innovation competitions, and other activities.

[Recommended] The Amazing Generative Adversarial Networks (GANs) (An Overview)

机器学习研究会 · WeChat Official Account · AI · 2017-03-21 19:07

Main text





Summary
 

Reposted from: 爱可可-爱生活

Have you ever wanted to know about Generative Adversarial Networks (GANs)? Maybe you just want to catch up on the topic? Or maybe you simply want to see how these networks have been refined over the last few years? Well, in these cases, this post might interest you!


What this post is not about

First things first, this is what you won’t find in this post:

  • Complex technical explanations

  • Code (there are links to code for those interested, though)

  • An exhaustive research list (you can already find it here)


What this post is about

  • A summary of relevant topics about GANs

  • A lot of links to other sites, posts and articles so you can decide where to focus


Index

  1. Understanding GANs

  2. GANs: the evolution

    1. DCGANs

    2. Improved DCGANs

    3. Conditional GANs

    4. InfoGANs

    5. Wasserstein GANs

  3. Closing


Understanding GANs

If you are familiar with GANs you can probably skip this section.


If you are reading this, chances are that you have heard GANs are pretty promising. Is the hype justified? This is what Yann LeCun, director of Facebook AI Research, thinks about them:

“Generative Adversarial Networks is the most interesting idea in the last ten years in machine learning.”


I personally think that GANs have huge potential, but we still have a lot to figure out.

So, what are GANs? I’m going to describe them very briefly. In case you are not familiar with them and want to know more, there are a lot of great sites with good explanations. As a personal recommendation, I like the ones from Eric Jang and Brandon Amos.

GANs, originally proposed by Ian Goodfellow, consist of two networks: a generator and a discriminator. They are trained at the same time and compete against each other in a minimax game. The generator is trained to fool the discriminator by creating realistic images, and the discriminator is trained not to be fooled by the generator.

[Figure: GAN training overview.]

First, the generator creates images. It does this by sampling a noise vector z from a simple distribution (e.g., a normal distribution) and then upsampling this vector into an image. In the first iterations, these images will look very noisy. Then, the discriminator is shown both fake and real images and learns to distinguish between them. The generator subsequently receives the discriminator's "feedback" through a backpropagation step, becoming better at generating images. In the end, we want the distribution of fake images to be as close as possible to the distribution of real images. Or, in simpler words, we want the fake images to look as plausible as possible.
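To make the loop above concrete, here is a minimal sketch of adversarial training in PyTorch. The tiny fully connected networks, the learning rate, and the flattened 28x28 images are illustrative assumptions of mine, not details from the post.

```python
# A minimal sketch of the GAN training loop described above (PyTorch).
# Network sizes, learning rate, and 28x28 images are illustrative assumptions.
import torch
import torch.nn as nn

latent_dim, img_dim = 64, 28 * 28

# Generator: upsamples a noise vector z into a (flattened) image.
G = nn.Sequential(
    nn.Linear(latent_dim, 256), nn.ReLU(),
    nn.Linear(256, img_dim), nn.Tanh(),
)

# Discriminator: maps an image to the probability that it is real.
D = nn.Sequential(
    nn.Linear(img_dim, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1), nn.Sigmoid(),
)

opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)
bce = nn.BCELoss()

def train_step(real_images):
    """real_images: (batch, img_dim) tensor, scaled to [-1, 1]."""
    batch = real_images.size(0)
    real_labels = torch.ones(batch, 1)
    fake_labels = torch.zeros(batch, 1)

    # 1) Train the discriminator to tell real images from fakes.
    z = torch.randn(batch, latent_dim)   # sample noise from N(0, I)
    fake_images = G(z).detach()          # don't backprop into G in this step
    d_loss = bce(D(real_images), real_labels) + bce(D(fake_images), fake_labels)
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # 2) Train the generator to fool the discriminator: the gradient of D's
    #    judgment flows back through G (the "feedback" mentioned above).
    z = torch.randn(batch, latent_dim)
    g_loss = bce(D(G(z)), real_labels)   # G wants D to output "real"
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
    return d_loss.item(), g_loss.item()
```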

It is worth mentioning that, due to the minimax optimization used in GANs, training can be quite unstable. There are some hacks, though, that you can use for more robust training.
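As one concrete example of such a hack, one-sided label smoothing (from Salimans et al., "Improved Techniques for Training GANs") trains the discriminator against a soft "real" target instead of 1.0 so it does not become overconfident. The snippet below reuses the names from the sketch above; the 0.9 target is the commonly suggested value, not something the post prescribes.

```python
# One-sided label smoothing: soften only the "real" targets for the
# discriminator. Reuses D, bce, real_images, fake_images, and batch
# from the sketch above; 0.9 is an assumed (commonly used) value.
real_labels = torch.full((batch, 1), 0.9)  # smoothed "real" target
fake_labels = torch.zeros(batch, 1)        # "fake" target stays at 0.0
d_loss = bce(D(real_images), real_labels) + bce(D(fake_images), fake_labels)
```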


Link:

http://guimperarnau.com/blog/2017/03/Fantastic-GANs-and-where-to-find-them


Original post:

http://weibo.com/1402400261/EALoTcZWg?from=page_1005051402400261_profile&wvr=6&mod=weibotime&type=comment#_rnd1490094107765
