Column: 机器学习研究会 (Machine Learning Research Society)
The Machine Learning Research Society is a student organization under the Big Data and Machine Learning Innovation Center at Peking University, dedicated to building a platform where machine learning practitioners can exchange ideas. Beyond sharing timely news from the field, the society also hosts lectures by industry leaders and prominent academics, salon-style sharing sessions with distinguished researchers, real-data innovation competitions, and other activities.

[Recommended] A Kaggle Master Explains the Fundamentals of Gradient Boosting

Machine Learning Research Society · WeChat Official Account · AI · 2017-01-24 19:51

Main Text


Abstract
 

Reposted from: 爱可可-爱生活

If linear regression was a Toyota Camry, then gradient boosting would be a UH-60 Blackhawk Helicopter. A particular implementation of gradient boosting, XGBoost, is consistently used to win machine learning competitions on Kaggle. Unfortunately, many practitioners (including my former self) use it as a black box. It’s also been butchered to death by a host of drive-by data scientists’ blogs. As such, the purpose of this article is to lay the groundwork for classical gradient boosting, intuitively and comprehensively.
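To make the “black box” point concrete, such usage often amounts to only a few lines. The following is a minimal sketch assuming the xgboost Python package; the data and hyperparameter values are synthetic placeholders, not taken from the article:

```python
import numpy as np
from xgboost import XGBRegressor

# Synthetic placeholder data: 100 samples, 5 features, a noisy linear target
rng = np.random.default_rng(0)
X = rng.random((100, 5))
y = X @ np.array([1.0, 2.0, 0.5, -1.0, 3.0]) + rng.normal(scale=0.1, size=100)

# Black-box usage: fit with a few common hyperparameters and predict
model = XGBRegressor(n_estimators=100, learning_rate=0.1, max_depth=3)
model.fit(X, y)
preds = model.predict(X)
```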


Motivation

We’ll start with a simple example.  We want to predict a person’s age based on whether they play video games, enjoy gardening, and like wearing hats.  Our objective is to minimize squared error.  We have nine training samples with which to build our model.
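The full post walks through this example step by step; as a preview, here is a minimal sketch of the classical gradient boosting loop it builds up to: start from the mean prediction, then repeatedly fit a shallow regression tree to the current residuals and add its output to the ensemble. The 0/1 feature encoding and age values below are illustrative stand-ins (the article’s actual nine-sample table is not reproduced here), and scikit-learn’s DecisionTreeRegressor is assumed as the weak learner:

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor

# Illustrative 0/1 features: [plays_video_games, enjoys_gardening, likes_hats]
X = np.array([
    [1, 0, 1], [1, 0, 0], [1, 0, 1],
    [1, 1, 0], [0, 1, 1], [0, 1, 0],
    [0, 1, 1], [0, 1, 0], [1, 1, 1],
])
# Illustrative ages for the nine training samples
y = np.array([13, 14, 15, 25, 35, 49, 68, 71, 73], dtype=float)

# F0(x): the mean of y, which minimizes squared error for a constant model
prediction = np.full_like(y, y.mean())

# Each round fits a depth-1 tree (a stump) to the residuals y - F(x)
# and adds its predictions to the ensemble.
for _ in range(10):
    residuals = y - prediction
    stump = DecisionTreeRegressor(max_depth=1).fit(X, residuals)
    prediction += stump.predict(X)

print(np.round(prediction, 1))  # the fit tightens as rounds accumulate
```

For squared error, the residual y − F(x) is exactly the negative gradient of the loss with respect to the current prediction, which is the observation that gives gradient boosting its name.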


Link:

http://blog.kaggle.com/2017/01/23/a-kaggle-master-explains-gradient-boosting/


Original post (Weibo):

http://weibo.com/1402400261/EsfeHFE5r?from=page_1005051402400261_profile&wvr=6&mod=weibotime&type=comment#_rnd1485255724349
