
Academia | The Most-Cited Deep Learning Papers of 2012-2016

机器之心 · WeChat Official Account · AI · 2017-02-17 12:06


Source: GitHub

Author: Terry Taewoong Um

Compiled by 机器之心

Contributor: 吴攀


In recent years, driven by the deep learning boom, new research in artificial intelligence has been popping up like 机器之心's groundhog mascot in spring, and it is no longer practically possible for any one person to read and keep up with all the work in this field. Knowing how to pick out the best papers to read has become an important skill for AI researchers, and one metric that deserves particular attention is a paper's citation count, especially its recent citations.


Terry Taewoong Um, a University of Waterloo PhD and GitHub user, hopes to help on this front: he has created a project on GitHub that lists the most-cited deep learning papers since 2012.


Project page: https://github.com/terryum/awesome-deep-learning-papers


This is a continuously updated project. 机器之心 translated and published an earlier version of it in June 2016 as "Academia | The Most-Cited Deep Learning Papers of 2010-2016 (with paper downloads)". The project has recently been updated again; below we look at which papers are cited the most.


Background and related resources


Before and alongside this list, there have also been other excellent deep learning lists, for example:


  • Deep vision: https://github.com/kjw0612/awesome-deep-vision

  • Recurrent neural networks: https://github.com/kjw0612/awesome-rnn

  • Deep learning papers reading roadmap: https://github.com/songrotek/Deep-Learning-Papers-Reading-Roadmap


But it is already hard enough to get through everything those lists mention, to say nothing of the many papers they leave out. So I am presenting here a top-100 list of deep learning papers, in the hope that it helps anyone who wants an overall picture of deep learning research.


Selection notes


1. The papers on this top-100 deep learning list were published between 2012 and 2016.

2. Whenever a paper is added to the list, another paper has to be removed (so removing a paper is just as much a contribution to the list as adding one).

3. Papers that are important but could not be included in the list are collected in the More than Top 100 list.

4. The New Papers and Old Papers sections contain papers published within the last 6 months and before 2012, respectively.


Selection criteria


  • Less than 6 months old: New Papers, added by discussion

  • 2016: at least 60 citations

  • 2015: at least 200 citations

  • 2014: at least 400 citations

  • 2013: at least 600 citations

  • 2012: at least 800 citations

  • Before 2012: Old Papers, added by discussion


Note that we favor seminal deep learning papers that can be applied across many lines of research over application papers; for this reason, some papers that meet the criteria above are still excluded. The decision also depends on a paper's impact, how scarce other work in its area is, and so on. (The citation thresholds above are sketched in code right below.)
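

To make the citation thresholds above concrete, here is a minimal sketch in Python. It is not part of the original project; the function name and structure are illustrative assumptions. New Papers (under 6 months old) and Old Papers (pre-2012) are added by discussion rather than by any threshold, so they fall outside this check.

    # Illustrative sketch of the per-year citation thresholds above
    # (an assumption for exposition, not code from the original repository).
    MIN_CITATIONS = {2016: 60, 2015: 200, 2014: 400, 2013: 600, 2012: 800}

    def meets_citation_criterion(publication_year: int, citations: int) -> bool:
        """Return True if a 2012-2016 paper clears its per-year citation threshold."""
        threshold = MIN_CITATIONS.get(publication_year)
        if threshold is None:
            # New Papers (< 6 months old) and Old Papers (pre-2012) are
            # added by discussion, so no numeric threshold applies to them.
            raise ValueError("Thresholds only cover papers published in 2012-2016")
        return citations >= threshold

    # Example: a 2014 paper needs at least 400 citations to qualify.
    assert meets_citation_criterion(2014, 450)
    assert not meets_citation_criterion(2014, 350)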


Contents


  • Understanding / Generalization / Transfer

  • Optimization / Training Techniques

  • Unsupervised / Generative Models

  • Convolutional Network Models

  • Image Segmentation / Object Detection

  • Image / Video / Etc.

  • Recurrent Neural Network Models

  • Natural Language Processing

  • Speech / Other Domains

  • Reinforcement Learning

  • Other Papers from 2016


Other highlights


  • New Papers

  • Old Papers

  • HW / SW / Datasets: technical reports

  • Books / Surveys / Reviews


Understanding / Generalization / Transfer


  • Distilling the knowledge in a neural network (2015), G. Hinton et al.

  • Deep neural networks are easily fooled: High confidence predictions for unrecognizable images (2015), A. Nguyen et al.

  • How transferable are features in deep neural networks? (2014), J. Yosinski et al.

  • CNN features off-the-shelf: An astounding baseline for recognition (2014), A. Razavian et al.

  • Learning and transferring mid-level image representations using convolutional neural networks (2014), M. Oquab et al.

  • Visualizing and understanding convolutional networks (2014), M. Zeiler and R. Fergus

  • Decaf: A deep convolutional activation feature for generic visual recognition (2014), J. Donahue et al.


Key researchers: Geoffrey Hinton, Yoshua Bengio, Jason Yosinski


Optimization / Training Techniques


  • Batch normalization: Accelerating deep network training by reducing internal covariate shift (2015), S. Ioffe and C. Szegedy

  • Delving deep into rectifiers: Surpassing human-level performance on imagenet classification (2015), K. He et al.

  • Dropout: A simple way to prevent neural networks from overfitting (2014), N. Srivastava et al.

  • Adam: A method for stochastic optimization (2014), D. Kingma and J. Ba

  • Improving neural networks by preventing co-adaptation of feature detectors (2012), G. Hinton et al.

  • Random search for hyper-parameter optimization (2012), J. Bergstra and Y. Bengio


Key researchers: Geoffrey Hinton, Yoshua Bengio, Christian Szegedy, Sergey Ioffe, Kaiming He, Diederik P. Kingma


Unsupervised / Generative Models


  • Pixel recurrent neural networks (2016), A. Oord et al.

  • Improved techniques for training GANs (2016), T. Salimans et al.

  • Unsupervised representation learning with deep convolutional generative adversarial networks (2015), A. Radford et al.

  • DRAW: A recurrent neural network for image generation (2015), K. Gregor et al.

  • Generative adversarial nets (2014), I. Goodfellow et al.

  • Auto-encoding variational Bayes (2013), D. Kingma and M. Welling

  • Building high-level features using large scale unsupervised learning (2013), Q. Le et al.


Key researchers: Yoshua Bengio, Ian Goodfellow, Alex Graves


Convolutional Network Models


  • Rethinking the inception architecture for computer vision (2016), C. Szegedy et al.

  • Inception-v4, inception-resnet and the impact of residual connections on learning (2016), C. Szegedy et al.

  • Identity Mappings in Deep Residual Networks (2016), K. He et al.

  • Deep residual learning for image recognition (2016), K. He et al.

  • Going deeper with convolutions (2015), C. Szegedy et al.

  • Very deep convolutional networks for large-scale image recognition (2014), K. Simonyan and A. Zisserman

  • Spatial pyramid pooling in deep convolutional networks for visual recognition (2014), K. He et al.

  • Return of the devil in the details: delving deep into convolutional nets (2014), K. Chatfield et al.

  • OverFeat: Integrated recognition, localization and detection using convolutional networks (2013), P. Sermanet et al.

  • Maxout networks (2013), I. Goodfellow et al.

  • Network in network (2013), M. Lin et al.

  • ImageNet classification with deep convolutional neural networks (2012), A. Krizhevsky et al.


Key researchers: Christian Szegedy, Kaiming He, Shaoqing Ren, Jian Sun, Geoffrey Hinton, Yoshua Bengio, Yann LeCun


Image Segmentation / Object Detection


  • You only look once: Unified, real-time object detection (2016), J. Redmon et al.

  • Region-based convolutional networks for accurate object detection and segmentation (2016), R. Girshick et al.

  • Fully convolutional networks for semantic segmentation (2015), J. Long et al.

  • Faster R-CNN: Towards Real-Time Object Detection with Region Proposal Networks (2015), S. Ren et al.

  • Fast R-CNN (2015), R. Girshick

  • Rich feature hierarchies for accurate object detection and semantic segmentation (2014), R. Girshick et al.

  • Semantic image segmentation with deep convolutional nets and fully connected CRFs (2015), L. Chen et al.

  • Learning hierarchical features for scene labeling (2013), C. Farabet et al.


Key researchers: Ross Girshick, Jeff Donahue, Trevor Darrell


Image / Video / Etc.


  • Image Super-Resolution Using Deep Convolutional Networks (2016), C. Dong et al.

  • A neural algorithm of artistic style (2015), L. Gatys et al.

  • Deep visual-semantic alignments for generating image descriptions (2015), A. Karpathy and L. Fei-Fei

  • Show, attend and tell: Neural image caption generation with visual attention (2015), K. Xu et al.

  • Show and tell: A neural image caption generator (2015), O. Vinyals et al.

  • Long-term recurrent convolutional networks for visual recognition and description (2015), J. Donahue et al.

  • VQA: Visual question answering (2015), S. Antol et al.

  • DeepFace: Closing the gap to human-level performance in face verification (2014), Y. Taigman et al.

  • Large-scale video classification with convolutional neural networks (2014), A. Karpathy et al.

  • DeepPose: Human pose estimation via deep neural networks (2014), A. Toshev and C. Szegedy

  • Two-stream convolutional networks for action recognition in videos (2014), K. Simonyan et al.

  • 3D convolutional neural networks for human action recognition (2013), S. Ji et al.


Key researchers: Oriol Vinyals, Andrej Karpathy


Recurrent Neural Network Models


  • Conditional random fields as recurrent neural networks (2015), S. Zheng and S. Jayasumana.

  • Memory networks (2014), J. Weston et al.

  • Neural turing machines (2014), A. Graves et al.

  • Generating sequences with recurrent neural networks (2013), A. Graves


Key researchers: Alex Graves


Natural Language Processing


  • A character-level decoder without explicit segmentation for neural machine translation (2016), J. Chung et al.

  • Exploring the limits of language modeling (2016), R. Jozefowicz et al.

  • Teaching machines to read and comprehend (2015), K. Hermann et al.

  • Effective approaches to attention-based neural machine translation (2015), M. Luong et al.

  • Neural machine translation by jointly learning to align and translate (2014), D. Bahdanau et al.

  • Sequence to sequence learning with neural networks (2014), I. Sutskever et al.

  • Learning phrase representations using RNN encoder-decoder for statistical machine translation (2014), K. Cho et al.

  • A convolutional neural network for modelling sentences (2014), N. Kalchbrenner et al.

  • Convolutional neural networks for sentence classification (2014), Y. Kim

  • Glove: Global vectors for word representation (2014), J. Pennington et al.

  • Distributed representations of sentences and documents (2014), Q. Le and T. Mikolov

  • Distributed representations of words and phrases and their compositionality (2013), T. Mikolov et al.

  • Efficient estimation of word representations in vector space (2013), T. Mikolov et al.

  • Recursive deep models for semantic compositionality over a sentiment treebank (2013), R. Socher et al.


Key researchers: Kyunghyun Cho, Oriol Vinyals, Richard Socher, Tomas Mikolov, Christopher D. Manning, Yoshua Bengio


Speech / Other Domains


  • End-to-end attention-based large vocabulary speech recognition (2016), D. Bahdanau et al.

  • Deep speech 2: End-to-end speech recognition in English and Mandarin (2015), D. Amodei et al.

  • Speech recognition with deep recurrent neural networks (2013), A. Graves

  • Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups (2012), G. Hinton et al.

  • Context-dependent pre-trained deep neural networks for large-vocabulary speech recognition (2012), G. Dahl et al.

  • Acoustic modeling using deep belief networks (2012), A. Mohamed et al.


Key researchers: Alex Graves, Geoffrey Hinton, Dong Yu


Reinforcement Learning


  • End-to-end training of deep visuomotor policies (2016), S. Levine et al.

  • Learning Hand-Eye Coordination for Robotic Grasping with Deep Learning and Large-Scale Data Collection (2016), S. Levine et al.

  • Asynchronous methods for deep reinforcement learning (2016), V. Mnih et al.

  • Deep Reinforcement Learning with Double Q-Learning (2016), H. van Hasselt et al.

  • Mastering the game of Go with deep neural networks and tree search (2016), D. Silver et al.

  • Continuous control with deep reinforcement learning (2015), T. Lillicrap et al.

  • Human-level control through deep reinforcement learning (2015), V. Mnih et al.

  • Deep learning for detecting robotic grasps (2015), I. Lenz et al.

  • Playing atari with deep reinforcement learning (2013), V. Mnih et al.


Key researchers: Sergey Levine, Volodymyr Mnih, David Silver


Other Papers from 2016


  • Layer Normalization (2016), J. Ba et al.






