Column: 机器学习研究会 (Machine Learning Research Society)
The Machine Learning Research Society is a student organization under the Big Data and Machine Learning Innovation Center at Peking University, which aims to build a platform for machine learning practitioners to exchange ideas. Besides timely sharing of news from the field, the society also hosts talks by industry and academic leaders, salon-style sharing sessions with prominent researchers, real-data innovation competitions, and other activities.

[Learning] The Practical Importance of Feature Selection

机器学习研究会 · WeChat official account · AI · 2017-06-13 21:42

Abstract

Reposted from: 爱可可-爱生活

If you wanted to classify animals, for example, based on a plethora of collected data, you would quickly find that all sorts of potential data attributes, or features, were relatively unhelpful for classification. Given that most living creatures have precisely one heart, for instance, this particular feature would not be beneficial from a learning perspective. On the other hand, an attribute denoting whether or not a given animal is hoofed would likely be a powerful predictor.

Further, using all of these irrelevant attributes, mixed in with the powerful predictors, may actually have a negative effect on the resulting model. This is to say nothing of the increased training times that may come along with the inclusion of useless attributes, or the overfitting which may occur on the training data.

Feature selection is the process of narrowing the available features, or attributes, down to a subset to be used in predictive modeling. It is useful on a variety of fronts: it is the best weapon against the Curse of Dimensionality; it can reduce overall training times; and it is a powerful defense against overfitting, increasing model generalizability.
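
To make this concrete, here is a minimal sketch of univariate feature selection; scikit-learn and the iris dataset are my assumptions, not something the article prescribes. It scores each feature against the class label with an ANOVA F-test and with estimated mutual information (information gain), two of the criteria Rubens Zimbres mentions below, and then keeps only the top-scoring subset.

# A minimal univariate feature selection sketch (scikit-learn assumed).
from sklearn.datasets import load_iris
from sklearn.feature_selection import SelectKBest, f_classif, mutual_info_classif

iris = load_iris()
X, y = iris.data, iris.target

# Score every feature against the class label. f_classif is an ANOVA
# F-test; mutual_info_classif estimates information gain.
f_scores, _ = f_classif(X, y)
mi_scores = mutual_info_classif(X, y, random_state=0)
for name, f, mi in zip(iris.feature_names, f_scores, mi_scores):
    print(f"{name:20s}  F={f:7.1f}  MI={mi:.2f}")

# Keep only the k highest-scoring features for downstream modeling.
selector = SelectKBest(score_func=f_classif, k=2)
X_selected = selector.fit_transform(X, y)
print("kept:", [iris.feature_names[i] for i in selector.get_support(indices=True)])

On iris, the two petal measurements score far above the sepal measurements on both criteria, which is exactly the kind of signal a practitioner would use to discard weak attributes before training.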

Something I read recently -- written so eloquently and concisely by data scientist Rubens Zimbres -- alludes to the importance of feature selection from a practical standpoint:

After some experiences, using stacked neural nets, parallel neural nets, asymmetric configs, simple neural nets, multiple layers, dropouts, activation functions etc there is one conclusion: There's NOTHING like a good Feature Selection.

Having had some professional contact with Rubens Zimbres in the past, I reached out to him for some elaboration. He provided the following:

Feature selection should be one of the main concerns for a data scientist. Accuracy and generalization power can be improved by a correct feature selection, based on correlation, skewness, t-tests, ANOVA, entropy, and information gain.

Many times, a correct feature selection allows you to develop simpler and faster machine learning models. Consider the picture below (a support vector machine classification of the iris dataset): on the left side, a wrong variable selection is presented, and neither the linear kernel nor the radial basis function kernel can handle the classification task properly. On the right side, petal width and petal length were selected as features, and even the linear kernel is quite accurate. A correct variable selection, a good algorithm choice, and hyperparameter tuning are the keys to success. The picture below was made with Python.

[Figure: SVM classification of the iris dataset. Left: a poor feature choice that neither the linear nor the RBF kernel separates well. Right: petal length vs. petal width, where even a linear kernel is accurate.]
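
The figure itself is not preserved in this repost, but the comparison it describes is easy to reproduce. The sketch below is a hedged reconstruction, assuming scikit-learn and 5-fold cross-validation (the post only says the original picture was made with Python): it evaluates an SVM on the two sepal features versus the two petal features.

# Reconstruction of the described comparison (scikit-learn assumed).
from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)
cases = {
    "sepal length + sepal width": X[:, [0, 1]],  # the "wrong" selection
    "petal length + petal width": X[:, [2, 3]],  # the "right" selection
}
for label, features in cases.items():
    for kernel in ("linear", "rbf"):
        acc = cross_val_score(SVC(kernel=kernel), features, y, cv=5).mean()
        print(f"{label:28s}  {kernel:6s}  accuracy={acc:.2f}")

With the sepal features, both kernels misclassify a substantial share of the versicolor and virginica samples; with the petal features, even the linear kernel is close to perfect, matching the behavior the quote describes.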
Link:

http://www.kdnuggets.com/2017/06/practical-importance-feature-selection.html


Original post link:

https://m.weibo.cn/1402400261/4118103216008702
