
[Recommended] A Beginner's Guide to Eigenvectors, PCA, Covariance, and Entropy

机器学习研究会 · WeChat Official Account · AI · 2017-06-20 22:15

Main Text




Reposted from: 爱可可-爱生活

Contents:
  • Linear Transformations

  • Principal Component Analysis (PCA)

  • Covariance Matrix

  • Change of Basis

  • Entropy & Information Gain

  • Just Give Me the Code

  • Resources

This post introduces eigenvectors and their relationship to matrices in plain language and without a great deal of math. It builds on those ideas to explain covariance, principal component analysis, and information entropy.


The eigen in eigenvector comes from German, and it means something like “very own.” For example, in German, “mein eigenes Auto” means “my very own car.” So eigen denotes a special relationship between two things. Something particular, characteristic and definitive. This car, or this vector, is mine and not someone else’s.


Matrices, in linear algebra, are simply rectangular arrays of numbers, a collection of scalar values between brackets, like a spreadsheet. Every square matrix (e.g. 2 x 2 or 3 x 3) has eigenvectors, though for some matrices (rotations, for example) they are complex rather than real. A square matrix has a very special relationship with its eigenvectors, a bit like Germans have with their cars.
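The post defines that relationship below, but to preview it in code: an eigenvector of a matrix is a vector the matrix merely scales, rather than knocks off its own line. Here is a minimal NumPy sketch (the matrix values are illustrative, not from the original post):

```python
import numpy as np

# An illustrative 2 x 2 matrix (not from the original post).
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# np.linalg.eig returns the eigenvalues and a matrix whose
# columns are the corresponding eigenvectors.
vals, vecs = np.linalg.eig(A)

for lam, v in zip(vals, vecs.T):
    # The defining property: A only scales its "very own" vectors.
    print(np.allclose(A @ v, lam * v))  # True for each pair
```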


Linear Transformations

We’ll define that relationship after a brief detour into what matrices do, and how they relate to other numbers.

Matrices are useful because you can do things with them like add and multiply. If you multiply a vector v by a matrix A, you get another vector b, and you could say that the matrix performed a linear transformation on the input vector.

Av = b

It maps one vector v to another, b.

We’ll illustrate with a concrete example. (In this type of matrix multiplication, each entry of b is the dot product of one row of A with the vector v.)
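The worked example itself was a figure in the original post and doesn't survive this repost. As a stand-in, here is a minimal sketch; the particular matrix and vector are assumptions chosen for illustration:

```python
import numpy as np

# Illustrative values (the original post's example was an image).
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
v = np.array([1.0, 1.0])

# Each entry of b is the dot product of one row of A with v.
b = A @ v
print(b)  # [3. 7.]
```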

So A turned v into b. Drawn as arrows from the origin, the matrix maps the short, low vector v to the long, high one, b.

You could feed one positive vector after another into matrix A, and each would be projected onto a new space that stretches higher and farther to the right.

Imagine that all the input vectors v live in a normal grid.
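The grid figure was also an image in the original post. As a rough stand-in, this sketch applies the same illustrative A to every point of a small grid and plots the input and output side by side (the plotting code is an addition for this repost, not the original's):

```python
import numpy as np
import matplotlib.pyplot as plt

# A small regular grid of input vectors, one per column of `grid`.
xs, ys = np.meshgrid(np.linspace(0, 1, 6), np.linspace(0, 1, 6))
grid = np.stack([xs.ravel(), ys.ravel()])  # shape (2, 36)

# The same illustrative matrix as above.
A = np.array([[1.0, 2.0],
              [3.0, 4.0]])

# Apply the linear transformation to every grid point at once.
transformed = A @ grid

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 4))
ax1.scatter(grid[0], grid[1])
ax1.set_title("input grid")
ax2.scatter(transformed[0], transformed[1])
ax2.set_title("after applying A")
plt.show()
```

Every grid point is carried to a new position, which is the sense in which A transforms the whole space rather than a single vector.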


Link:

https://deeplearning4j.org/eigenvector


Original link:

http://weibo.com/1402400261/F8DT38XeG?ref=home&rid=12_0_202_2670209205668571931&type=comment#_rnd1497950870229
