Column: 机器学习研究会 (Machine Learning Research Society)
机器学习研究会 is a student organization under Peking University's Innovation Center for Big Data and Machine Learning, which aims to build a platform where machine learning practitioners can exchange ideas. Besides sharing timely news from the field, the society also hosts talks by industry and academic leaders, salon-style sharing sessions with senior researchers, real-data innovation competitions, and other activities.

[Recommended] Stanford course: Theories of Deep Learning (with videos, slides, and readings)

机器学习研究会 · WeChat official account · AI · 2017-11-08 22:59

Main text





Summary

Reposted from: 爱可可-爱生活

The goal of this course is to review currently available theories for deep learning and encourage better theoretical understanding of deep learning algorithms. 


Lecture slides for STATS385, Fall 2017

Lecture01: Deep Learning Challenge. Is There Theory? (Donoho/Monajemi/Papyan)

Lecture02: Overview of Deep Learning From a Practical Point of View (Donoho/Monajemi/Papyan)

Lecture03: Harmonic Analysis of Deep Convolutional Neural Networks (Helmut Bolcskei)

Lecture04: Convnets from First Principles: Generative Models, Dynamic Programming & EM (Ankit Patel)

Lecture05: When Can Deep Networks Avoid the Curse of Dimensionality and Other Theoretical Puzzles (Tomaso Poggio)

Lecture06: Views of Deep Networks from Reproducing Kernel Hilbert Spaces (Zaid Harchaoui)


Lecture 1 – Deep Learning Challenge. Is There Theory?

Readings

  1. Deep Deep Trouble

  2. Why 2016 is The Global Tipping Point...

  3. Are AI and ML Killing Analytics...

  4. The Dark Secret at The Heart of AI

  5. AI Robots Learning Racism...

  6. FaceApp Forced to Pull 'Racist' Filters...

  7. Losing a Whole Generation of Young Men to Video Games

Lecture 2 – Overview of Deep Learning From a Practical Point of View

Readings

  1. Emergence of simple cell

  2. ImageNet Classification with Deep Convolutional Neural Networks (Alexnet)

  3. Very Deep Convolutional Networks for Large-Scale Image Recognition (VGG)

  4. Going Deeper with Convolutions (GoogLeNet)

  5. Deep Residual Learning for Image Recognition (ResNet)

  6. Batch Normalization: Accelerating Deep Network Training by Reducing Internal Covariate Shift

  7. Visualizing and Understanding Convolutional Networks

Blogs

  1. An Intuitive Guide to Deep Network Architectures

  2. Neural Network Architectures

Videos

  1. Deep Visualization Toolbox
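
The Lecture 2 materials above center on convolutional architectures (AlexNet, VGG, GoogLeNet, ResNet) and batch normalization. As a rough reference point, here is a minimal PyTorch-style sketch of a residual block with batch normalization; the module name and channel counts are illustrative choices of ours, not code from the course:

    # Minimal residual block (PyTorch), illustrating the ResNet and BatchNorm
    # readings above. Channel counts are arbitrary, for illustration only.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class ResidualBlock(nn.Module):
        def __init__(self, channels):
            super().__init__()
            # Two 3x3 convolutions, each followed by batch normalization.
            self.conv1 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
            self.bn1 = nn.BatchNorm2d(channels)
            self.conv2 = nn.Conv2d(channels, channels, 3, padding=1, bias=False)
            self.bn2 = nn.BatchNorm2d(channels)

        def forward(self, x):
            out = F.relu(self.bn1(self.conv1(x)))
            out = self.bn2(self.conv2(out))
            # Skip connection: the block learns a residual F(x), outputs F(x) + x.
            return F.relu(out + x)

    x = torch.randn(8, 64, 32, 32)        # batch of 8, 64 channels, 32x32 maps
    print(ResidualBlock(64)(x).shape)     # torch.Size([8, 64, 32, 32])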

Lecture 3

Readings

  1. A Mathematical Theory of Deep Convolutional Neural Networks for Feature Extraction

  2. Energy Propagation in Deep Convolutional Neural Networks

  3. Discrete Deep Feature Extraction: A Theory and New Architectures

  4. Topology Reduction in Deep Convolutional Feature Extraction Networks

Lecture 4

Readings

  1. A Probabilistic Framework for Deep Learning

  2. Semi-Supervised Learning with the Deep Rendering Mixture Model

  3. A Probabilistic Theory of Deep Learning

Lecture 5

Readings

  1. Why and When Can Deep-but Not Shallow-networks Avoid the Curse of Dimensionality: A Review

  2. Learning Functions: When is Deep Better Than Shallow
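
The "curse of dimensionality" in these two readings is the exponential growth of generic approximation cost with input dimension; a one-line count makes it concrete (the grid resolution below is an arbitrary illustration):

    # Points needed for an eps-grid of the unit cube [0,1]^d: (1/eps)^d.
    # This exponential blow-up is the curse of dimensionality; the Lecture 5
    # readings ask when deep, compositional networks can avoid it.
    eps = 0.1
    for d in (1, 2, 10, 100):
        print(d, (1 / eps) ** d)          # 10, 100, 1e10, 1e100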

Lecture 6

Readings

  1. Convolutional Patch Representations for Image Retrieval: an Unsupervised Approach

  2. Convolutional Kernel Networks

  3. Kernel Descriptors for Visual Recognition

  4. End-to-End Kernel Learning with Supervised Convolutional Kernel Networks

  5. Learning with Kernels

  6. Kernel Based Methods for Hypothesis Testing
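
Since the Lecture 6 readings view deep networks through reproducing kernel Hilbert spaces, a minimal NumPy sketch of a plain kernel method (Gaussian-kernel ridge regression) may help as a baseline; the bandwidth and regularization values here are arbitrary, not taken from the papers:

    # Gaussian-kernel ridge regression in NumPy, as background for the RKHS
    # viewpoint in the Lecture 6 readings. Hyperparameters are illustrative.
    import numpy as np

    def gaussian_kernel(A, B, bandwidth=1.0):
        # k(a, b) = exp(-||a - b||^2 / (2 * bandwidth^2))
        sq = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-sq / (2 * bandwidth ** 2))

    rng = np.random.default_rng(0)
    X = rng.uniform(-3, 3, size=(50, 1))
    y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(50)

    lam = 1e-2                            # ridge regularization strength
    K = gaussian_kernel(X, X)
    alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)

    X_test = np.linspace(-3, 3, 5)[:, None]
    print(gaussian_kernel(X_test, X) @ alpha)   # f(x) = sum_i alpha_i k(x, x_i)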

Lecture 7

Readings

  1. Geometry of Neural Network Loss Surfaces via Random Matrix Theory

  2. Resurrecting the sigmoid in deep learning through dynamical isometry: theory and practice

  3. Nonlinear random matrix theory for deep learning
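
To get a feel for the random-matrix viewpoint in these Lecture 7 papers, one can sample an i.i.d. Gaussian weight matrix and check that the spectrum of its Gram matrix fills the Marchenko-Pastur support; the matrix sizes below are arbitrary:

    # Empirical spectrum of a random Gram matrix (NumPy). With entries of
    # variance 1/n, the eigenvalues of W^T W fall (for large n, p) inside the
    # Marchenko-Pastur support [(1 - sqrt(p/n))^2, (1 + sqrt(p/n))^2].
    import numpy as np

    rng = np.random.default_rng(0)
    n, p = 4000, 1000                     # aspect ratio p/n = 0.25
    W = rng.standard_normal((n, p)) / np.sqrt(n)
    eigs = np.linalg.eigvalsh(W.T @ W)
    print(eigs.min(), eigs.max())         # close to [0.25, 2.25] here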

Lecture 8

Readings

  1. Deep Learning without Poor Local Minima

  2. Topology and Geometry of Half-Rectified Network Optimization

  3. Convexified Convolutional Neural Networks

  4. Implicit Regularization in Matrix Factorization

Lecture 9

Readings

  1. Neocognitron: A self-organizing neural network model for a mechanism of pattern recognition unaffected by shift in position

  2. Perception as an inference problem

  3. A Neurobiological Model of Visual Attention and Invariant Pattern Recognition Based on Dynamic Routing of Information

Lecture 10

Readings

  1. Working Locally Thinking Globally: Theoretical Guarantees for Convolutional Sparse Coding

  2. Convolutional Neural Networks Analyzed via Convolutional Sparse Coding

  3. Multi-Layer Convolutional Sparse Modeling: Pursuit and Dictionary Learning

  4. Convolutional Dictionary Learning via Local Processing
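
The Lecture 10 readings build on sparse coding: writing a signal x as D z with z sparse. A standard solver for this is ISTA (iterative soft-thresholding); below is a minimal NumPy sketch with a random dictionary, not the multi-layer convolutional model studied in the papers:

    # ISTA sparse coding in NumPy: approximately solve
    #   min_z 0.5 * ||x - D z||^2 + lam * ||z||_1
    # for a fixed dictionary D (random here, for illustration only).
    import numpy as np

    def ista(D, x, lam=0.1, n_iters=200):
        L = np.linalg.norm(D, 2) ** 2     # Lipschitz constant of the gradient
        z = np.zeros(D.shape[1])
        for _ in range(n_iters):
            u = z - D.T @ (D @ z - x) / L                           # gradient step
            z = np.sign(u) * np.maximum(np.abs(u) - lam / L, 0.0)   # soft-threshold
        return z

    rng = np.random.default_rng(0)
    D = rng.standard_normal((64, 256))
    D /= np.linalg.norm(D, axis=0)        # unit-norm atoms
    z_true = np.zeros(256)
    z_true[rng.choice(256, 5, replace=False)] = 1.0
    x = D @ z_true

    z_hat = ista(D, x)
    print(np.count_nonzero(np.abs(z_hat) > 1e-3))   # recovers a sparse code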

To be discussed and extra

  • Emergence of simple cell by Olshausen and Field

  • Auto-Encoding Variational Bayes by Kingma and Welling

  • Generative Adversarial Networks by Goodfellow et al.

  • Understanding Deep Learning Requires Rethinking Generalization by Zhang et al.

  • Deep Neural Networks with Random Gaussian Weights: A Universal Classification Strategy? by Giryes et al.

  • Robust Large Margin Deep Neural Networks by Sokolic et al.

  • Tradeoffs between Convergence Speed and Reconstruction Accuracy in Inverse Problems by Giryes et al.

  • Understanding Trainable Sparse Coding via Matrix Factorization by Moreau and Bruna

  • Why are Deep Nets Reversible: A Simple Theory, With Implications for Training by Arora et al.

  • Stable Recovery of the Factors From a Deep Matrix Product and Application to Convolutional Network by Malgouyres and Landsberg

  • Optimal Approximation with Sparse Deep Neural Networks by Bolcskei et al.

  • Convolutional Rectifier Networks as Generalized Tensor Decompositions by Cohen and Shashua


Course homepage:

https://stats385.github.io/


Video link:

https://www.researchgate.net/project/Theories-of-Deep-Learning


Original post:

https://m.weibo.cn/1402400261/4171693540736036
