This post introduces eigenvectors and their relationship to matrices
in plain language and without a great deal of math. It builds on those
ideas to explain covariance, principal component analysis, and
information entropy.
The eigen in eigenvector comes from German, and it means
something like “very own.” For example, in German, “mein eigenes Auto”
means “my very own car.” So eigen denotes a special relationship between
two things. Something particular, characteristic and definitive. This
car, or this vector, is mine and not someone else’s.
Matrices, in linear algebra, are simply rectangular arrays of
numbers, a collection of scalar values between brackets, like a
spreadsheet. Every square matrix (e.g. 2 x 2 or 3 x 3) has
eigenvectors (possibly complex-valued ones), and it has a very special
relationship with them, a bit like Germans have with their cars.
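A quick sketch of that special relationship with NumPy (the matrix values here are an illustration, not from the post): an eigenvector v of A is a vector that A merely scales, by its eigenvalue λ, so that Av = λv.

```python
import numpy as np

# A small square matrix; values chosen only for illustration.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# np.linalg.eig returns the eigenvalues and, as columns, the eigenvectors.
eigenvalues, eigenvectors = np.linalg.eig(A)

# The defining property: A only scales its "very own" vectors.
for lam, v in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(A @ v, lam * v)

print(eigenvalues)
```

Because this A is symmetric, its eigenvalues come out real; a rotation matrix, by contrast, would yield complex ones.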
Linear Transformations
We’ll define that relationship after a brief detour into what matrices do, and how they relate to other numbers.
Matrices are useful because you can do things with them like add and multiply. If you multiply a vector v by a matrix A, you get another vector b, and you could say that the matrix performed a linear transformation on the input vector.
Av = b
It maps one vector v to another, b.
We’ll illustrate with a concrete example. (This type of matrix multiplication is carried out with dot products: each entry of b is the dot product of a row of A with v.)
So A turned v into b. In the graph below, we see how the matrix mapped the short, low line v to the long, high one, b.
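The post's original A and v appeared in an image; the values below are assumed stand-ins that reproduce the described effect of turning a short, low vector into a longer, higher one.

```python
import numpy as np

# Assumed example values (the post's originals were in a figure).
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
v = np.array([1.0, 1.0])   # the short, low input vector

b = A @ v                  # each entry of b is a row of A dotted with v
print(b)                   # b = [3. 3.]: longer and higher than v
assert np.linalg.norm(b) > np.linalg.norm(v)
```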
You could feed one positive vector after another into matrix A, and
each would be projected onto a new space that stretches higher and
farther to the right.
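One way to see this stretching numerically (again with an assumed matrix, not the post's): apply A over and over to any positive vector and it gets pulled toward a single direction, the direction of A's dominant eigenvector.

```python
import numpy as np

# Assumed illustrative matrix; its dominant eigenvector is [1, 1] / sqrt(2).
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

v = np.array([3.0, 1.0])          # an arbitrary positive starting vector
for _ in range(20):
    v = A @ v                     # project into the "stretched" space
    v = v / np.linalg.norm(v)     # renormalize so it doesn't blow up

dominant = np.array([1.0, 1.0]) / np.sqrt(2.0)
assert np.allclose(v, dominant, atol=1e-6)
print(v)
```

This repeated multiply-and-normalize loop is the idea behind power iteration, a simple way to find a matrix's dominant eigenvector.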
Imagine that all the input vectors v live in a normal grid, like this:
Link:
https://deeplearning4j.org/eigenvector