Column: Machine Learning Research Society
The Machine Learning Research Society is a student organization under the Big Data and Machine Learning Innovation Center at Peking University, aiming to build a platform for machine learning practitioners to exchange ideas. Besides sharing timely news from the field, the society organizes talks by industry and academic leaders, salon-style sharing sessions with researchers, real-data innovation competitions, and other activities.

[Recommended] An AAAI 2017 tutorial on the design principles and implementation of deep learning frameworks, comparing the performance, similarities, and differences of several popular frameworks

Machine Learning Research Society · WeChat official account · AI · 2017-02-06 18:11

Main text



Reposted from: 星空下的巫师

"Deep Learning Implementations and Frameworks (DLIF)", a tutorial at AAAI 2017, is devoted to the design principles and implementation of deep learning frameworks, and compares the performance, similarities, and differences of several popular frameworks (Caffe, MXNet, TensorFlow, Chainer, etc.).


About this tutorial

This tutorial explains general knowledge on design principles for deep learning frameworks, with a goal of providing a guideline to choose a suitable framework for researchers and practitioners of AI who want to utilize deep learning for their own tasks.

Today, software frameworks for deep learning, such as TensorFlow and Caffe, are widely employed in many deep learning systems to accelerate research and development. Deep learning plays a fundamental role in core AI technologies including image/audio recognition, planning, and natural language processing, and serves as a building block for AI systems such as robots, games, question answering, and medical diagnosis. Frameworks hide low-level implementation details and provide a systematic way to implement models.

At a higher level, the design of deep learning models is, in essence, to combine components and heuristics. Technical elements for deep learning are often common and reusable. For example, a typical deep learning architecture for image recognition is a layered stack of convolutional and pooling operations. Many heuristics, including dropout and batch normalization, are commonly employed to enhance generalization performance. A great deal of the implementation can therefore be reduced to finding good combinations of components like convolution and pooling and heuristics like dropout and batch normalization. This is why we use deep learning frameworks for efficient coding.
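To make the "combine components and heuristics" idea concrete, here is a minimal sketch in plain Python. It mimics how frameworks like Chainer or TensorFlow let users stack layers, but all names and the toy math are illustrative assumptions, not any real framework's API:

```python
# Hypothetical sketch: a framework's core job is to let users chain
# reusable components into a model. Real frameworks implement layers
# as optimized tensor operations, not Python lists.

class Sequential:
    """Compose components so the output of one feeds the next."""
    def __init__(self, *layers):
        self.layers = layers

    def __call__(self, x):
        for layer in self.layers:
            x = layer(x)
        return x

def relu(x):
    """A reusable component: element-wise rectifier."""
    return [max(0.0, v) for v in x]

def dropout_inference(x):
    """A heuristic like dropout reduces to the identity at inference time."""
    return x

model = Sequential(relu, dropout_inference)
print(model([-1.0, 2.0, -3.0]))  # [0.0, 2.0, 0.0]
```

The design point the tutorial highlights is that once composition is this cheap, the hard work shifts from writing layers to choosing the right combination of them.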

Choosing an appropriate deep learning framework requires proper knowledge of the fundamental design principles of frameworks. The variety of available frameworks makes it difficult for users to select the most suitable one. In addition to reusability, competing demands on speed, scalability, code simplicity, ease of debugging, and community size pose further difficulty. Choosing a suboptimal framework may degrade the efficiency of research and development, damage the utility of the work, and lead to diminished popularity.

Given the recent development of deep learning beyond simple pattern recognition tasks, this tutorial also provides useful technical information for general AI applications.


Link:

https://sites.google.com/site/dliftutorial/


Original post:

http://weibo.com/1785748853/EubcVFp8g?type=comment#_rnd1486369992274
