Column: 机器学习研究会
机器学习研究会 is a student organization under Peking University's Big Data and Machine Learning Innovation Center, aiming to build a platform where machine learning practitioners can exchange ideas. Besides sharing timely news from the field, the society also hosts talks by industry and academic leaders, salon-style sharing sessions with prominent researchers, real-data innovation competitions, and other activities.

[Recommended] Distributed Deep Learning with Apache Spark and Keras

机器学习研究会 · Official Account · AI · 2017-01-26 23:17

Main text


Summary

Reposted from: 爱可可-爱生活

In the following blog posts we study the topic of distributed deep learning, or rather, how to parallelize gradient descent using data-parallel methods. We start by laying out the theory, while supplying some intuition for the techniques we apply. At the end of this blog post, we conduct experiments to evaluate how different optimization schemes perform in identical situations. We also introduce dist-keras, our distributed deep learning framework built on top of Apache Spark and Keras, for which we provide several notebooks and examples. This framework is mainly used to test our distributed optimization schemes; however, it also has several practical applications at CERN, not only for distributed learning but also for model serving. For example, we provide examples showing how to integrate the framework with Spark Streaming and Apache Kafka. Finally, this series contains parts of my master's-thesis research and will mainly track my research progress, but some readers may find the approaches presented here useful in their own work.
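To make the data-parallel idea concrete, below is a minimal sketch of synchronous data-parallel gradient descent in plain NumPy: each worker computes the gradient of the loss on its own shard of the data, the gradients are averaged, and the averaged gradient updates a single shared parameter vector. This is only an illustration of the general scheme discussed in the post, not the dist-keras API; the function and parameter names (local_gradient, data_parallel_sgd, n_workers) are hypothetical.

import numpy as np

def local_gradient(w, X_shard, y_shard):
    # Least-squares gradient computed by one "worker" on its data shard.
    preds = X_shard @ w
    return 2.0 * X_shard.T @ (preds - y_shard) / len(y_shard)

def data_parallel_sgd(X, y, n_workers=4, lr=0.1, epochs=100):
    # Split the data across workers, average their gradients each step,
    # and apply the averaged gradient to one shared parameter vector.
    w = np.zeros(X.shape[1])
    X_shards = np.array_split(X, n_workers)
    y_shards = np.array_split(y, n_workers)
    for _ in range(epochs):
        grads = [local_gradient(w, Xs, ys) for Xs, ys in zip(X_shards, y_shards)]
        w -= lr * np.mean(grads, axis=0)  # synchronous update
    return w

# Toy usage: recover the weights of a linear model from noisy data.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))
true_w = np.array([1.5, -2.0, 0.5])
y = X @ true_w + 0.01 * rng.normal(size=1000)
print(data_parallel_sgd(X, y))  # should be close to [1.5, -2.0, 0.5]

The asynchronous schemes evaluated in the post (e.g., parameter-server style updates) relax the synchronization step above, letting workers push gradients without waiting for one another.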


Link:

https://db-blog.web.cern.ch/blog/joeri-hermans/2017-01-distributed-deep-learning-apache-spark-and-keras


Original post:

http://weibo.com/1402400261/EsD9yqr4m?from=page_1005051402400261_profile&wvr=6&mod=weibotime&type=comment#_rnd1485443560630
