Column: 机器学习研究会 (Machine Learning Research Society)
The Machine Learning Research Society is a student organization under the Peking University Innovation Center for Big Data and Machine Learning, which aims to build a platform where machine learning practitioners can exchange ideas. Besides sharing timely news from the field, the society also hosts talks by industry and academic leaders, salon-style sharing sessions with senior researchers, real-data innovation competitions, and other activities.

[Recommended] A Hands-on Guide to Building CNNs with TensorFlow

机器学习研究会 · WeChat Official Account · AI · 2017-08-17 22:26

Main text



Reposted from: 爱可可-爱生活

1. Introduction

In the past I have mostly written about ‘classical’ Machine Learning, like Naive Bayes classification, Logistic Regression, and the Perceptron algorithm. In the past year I have also worked with Deep Learning techniques, and I would like to share with you how to make and train a Convolutional Neural Network from scratch, using tensorflow. Later on we can use this knowledge as a building block to make interesting Deep Learning applications.

For this you will need to have tensorflow installed (see installation instructions) and you should also have a basic understanding of Python programming and the theory behind Convolutional Neural Networks. After you have installed tensorflow, you can run the smaller Neural Networks without a GPU, but for the deeper networks you will definitely need some GPU power.
The Internet is full of awesome websites and courses which explain how a convolutional neural network works. Some of them have good visualisations which make it easy to understand [click here for more info]. I don’t feel the need to explain the same things again, so before you continue, make sure you understand how a convolutional neural network works. For example (a short code sketch of these building blocks follows the list below):

  • What is a convolutional layer, and what is the filter of this convolutional layer?

  • What is an activation layer (ReLU layer (most widely used), sigmoid activation or tanh)?

  • What is a pooling layer (max pooling / average pooling), and what is dropout?

  • How does Stochastic Gradient Descent work?
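
As a quick refresher, here is a minimal sketch (my own, not taken from the post) of how these building blocks look in TensorFlow 1.x; the batch size, image size, number of filters, and keep probability are chosen purely for illustration.

import numpy as np
import tensorflow as tf

# a placeholder for a batch of 8 grayscale images of 28x28 pixels
images = tf.placeholder(tf.float32, shape=[8, 28, 28, 1])

# convolutional layer: 16 filters of size 5x5, stride 1, zero-padded
conv_filter = tf.Variable(tf.truncated_normal([5, 5, 1, 16], stddev=0.1))
conv = tf.nn.conv2d(images, conv_filter, strides=[1, 1, 1, 1], padding='SAME')

# activation layer (ReLU), followed by 2x2 max pooling with stride 2
relu = tf.nn.relu(conv)
pool = tf.nn.max_pool(relu, ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding='SAME')

# dropout: randomly keep 75% of the activations during training
dropped = tf.nn.dropout(pool, keep_prob=0.75)

with tf.Session() as session:
    session.run(tf.global_variables_initializer())
    output = session.run(dropped, feed_dict={images: np.zeros((8, 28, 28, 1))})
    print(output.shape)  # (8, 14, 14, 16): pooling halved the spatial dimensions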

 

The contents of this blog post are as follows:

  1. Tensorflow basics
    • 1.1 Constants and Variables
    • 1.2 Tensorflow Graphs and Sessions
    • 1.3 Placeholders and feed_dicts
  2. Neural Networks in Tensorflow
    • 2.1 Introduction
    • 2.2 Loading in the data
    • 2.3 Creating a (simple) 1-layer Neural Network
    • 2.4 The many faces of Tensorflow
    • 2.5 Creating the LeNet5 CNN
    • 2.6 How the parameters affect the output size of a layer
    • 2.7 Adjusting the LeNet5 architecture
    • 2.8 Impact of Learning Rate and Optimizer
  3. Deep Neural Networks in Tensorflow
    • 3.1 AlexNet
    • 3.2 VGG Net-16
    • 3.3 AlexNet Performance
  4. Final words

1. Tensorflow basics

Here I will give a short introduction to Tensorflow for people who have never worked with it before. If you want to start building Neural Networks immediately, or you are already familiar with Tensorflow, you can go ahead and skip to section 2. If you would like to know more about Tensorflow, you can also have a look at this repository, or the notes of lecture 1 and lecture 2 of Stanford’s CS20SI course.
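
Before the link to the full article, here is a minimal sketch (my own, not part of the original post) covering the three basics from the table of contents: constants and variables, graphs and sessions, and placeholders with feed_dicts, again in TensorFlow 1.x style.

import tensorflow as tf

graph = tf.Graph()
with graph.as_default():
    # 1.1 constants hold fixed values, variables hold (trainable) state
    a = tf.constant(2.0)
    w = tf.Variable(5.0)

    # 1.3 placeholders are filled in at run time through a feed_dict
    x = tf.placeholder(tf.float32, shape=())
    y = w * x + a

# 1.2 nothing is computed until the graph is run inside a session
with tf.Session(graph=graph) as session:
    session.run(tf.global_variables_initializer())
    print(session.run(y, feed_dict={x: 3.0}))  # 5.0 * 3.0 + 2.0 = 17.0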


Link:
http://ataspinar.com/2017/08/15/building-convolutional-neural-networks-with-tensorflow/

Original link:
https://m.weibo.cn/1402400261/4140930397690270
