Column: 机器学习研究会 (Machine Learning Research Society)
The Machine Learning Research Society is a student organization under Peking University's Big Data and Machine Learning Innovation Center, which aims to build a platform where machine learning practitioners can exchange ideas. In addition to sharing timely news from the field, the society hosts talks by industry and academic leaders, salon-style sharing sessions with senior researchers, real-data innovation competitions, and other activities.

[Recommended] A tutorial on optimization and large-scale machine learning given at AAAI 2017 by Tie-Yan Liu and colleagues from Microsoft Research Asia

机器学习研究会 · WeChat official account · AI · 2017-02-13 18:54

Main text


Summary

Reposted from: 洪亮劼 (Liangjie Hong)

Tie-Yan Liu and colleagues from Microsoft Research Asia recently gave a tutorial on optimization and large-scale machine learning at AAAI 2017, "Recent Advances in Distributed Machine Learning", and it is well worth a look. It gives a fairly thorough summary of classical optimization algorithms, in particular their theoretical properties and the corresponding theoretical properties of their distributed counterparts, making it a good fit for researchers and engineers who want a quick overview of these areas. The tutorial also covers DMTK, including its strengths and weaknesses as a distributed computing platform, and briefly compares it with popular frameworks such as Spark and TensorFlow.


Abstract:

In recent years, artificial intelligence has demonstrated its power in many important applications. Besides the novel machine learning algorithms (for example, deep neural networks), their distributed implementations play a very critical role in these successes. In this tutorial, we will first review popular machine learning models and their corresponding optimization techniques. Second, we will introduce different ways of parallelizing machine learning algorithms, that is, data parallelism, model parallelism, synchronous parallelism, asynchronous parallelism, and so on, and discuss their theoretical properties, advantages, and limitations. Third, we will discuss some recent research works that try to overcome the limitations of standard parallelization mechanisms, including advanced asynchronous parallelism and new communication and aggregation methods. Finally, we will introduce how to leverage popular distributed machine learning platforms, such as Spark MLlib, DMTK, TensorFlow, to parallelize a given machine learning algorithm, in order to give the audience some practical guidelines on this topic.
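
To make the parallelization schemes listed in the abstract more concrete, below is a minimal sketch of synchronous data parallelism: each worker computes a gradient on its own shard of the data, the gradients are averaged, and a single global update is applied. This is only an illustration of the general idea, written in plain NumPy as a single-machine simulation; it is not code from the tutorial or from any of the platforms it mentions, and the function and variable names (e.g. `synchronous_data_parallel_sgd`, `local_gradient`) are made up for this example.

```python
# Illustrative sketch (not from the tutorial): synchronous data-parallel
# gradient descent for least-squares linear regression. Each "worker" holds
# one shard of the data, computes a local gradient, and the gradients are
# averaged before one global model update -- the basic data-parallel pattern.
import numpy as np

def local_gradient(w, X_shard, y_shard):
    """Gradient of 0.5 * ||X w - y||^2 / n on one worker's data shard."""
    residual = X_shard @ w - y_shard
    return X_shard.T @ residual / len(y_shard)

def synchronous_data_parallel_sgd(X, y, num_workers=4, lr=0.1, num_steps=200):
    """Simulate synchronous data parallelism on a single machine."""
    shards = list(zip(np.array_split(X, num_workers),
                      np.array_split(y, num_workers)))
    w = np.zeros(X.shape[1])
    for _ in range(num_steps):
        # Each worker computes a gradient on its own shard (done serially
        # here; in a real system these would run on separate machines).
        grads = [local_gradient(w, Xs, ys) for Xs, ys in shards]
        # Synchronous aggregation: average the workers' gradients, then
        # apply one global update so all workers see the same new model.
        w -= lr * np.mean(grads, axis=0)
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w_true = np.array([2.0, -3.0, 0.5])
    X = rng.normal(size=(1000, 3))
    y = X @ w_true + 0.01 * rng.normal(size=1000)
    print("recovered weights:", np.round(synchronous_data_parallel_sgd(X, y), 3))
```

In the asynchronous variants the tutorial discusses, each worker would instead push its gradient to a parameter server and pull the latest model without waiting for the other workers, trading some gradient staleness for higher throughput.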
