Recommendation engines are all the rage. From Netflix to Amazon, all of the big players have been pushing the envelope with research initiatives focused on making better recommendations for users. For years, most of this research appeared through academic papers or through books that neatly organized those papers by technique (e.g. collaborative filtering, content-based filtering, etc.) to make them easier to digest. There have actually been very few pure textbooks on the subject, given that it is a fairly new research area.
In 2016, Charu Aggarwal published Recommender Systems: The Textbook, a massively detailed walkthrough of recommendation systems from the basics all the way to the current state of research. I highly recommend it to anyone interested in recommendation systems, whether you are doing research or just want to gain some intuition; his explanations are fantastic!
In chapter 3 of his book, Aggarwal discusses model-based collaborative filtering, which includes several methods of modelling the classic user-item matrix to make recommendations. One focus of the chapter is on matrix factorization techniques that have become so popular in recent years. While introducing unconstrained matrix factorization, he remarks the following:
Much of the recommendation literature refers to unconstrained matrix factorization as singular value decomposition (SVD). Strictly speaking, this is technically incorrect; in SVD, the columns of U and V must be orthogonal. However, the use of the term "SVD" to refer to unconstrained matrix factorization is rather widespread in the recommendation literature, which causes some confusion to practitioners from outside the field.
Aggarwal - Section 3.6.4 of Recommender Systems (2016)
Before getting into more details about the inconsistency remarked by Aggarwal, let's go over what singular value decomposition (SVD) is and what plain old matrix factorization is.
Matrix factorizations all perform the same task but in different ways. They all take a matrix and break it down into a product of smaller matrices (its factors). It's very similar to the factoring we did in elementary school: we took a number like 12 and broke it down into its factor pairs (1, 12), (2, 6), (3, 4), where each pair yields 12 when multiplied together. Factorizing matrices is exactly the same idea, but since a matrix is inherently more complex than a single number, there are many, many ways to perform the breakdown. Check out Wikipedia for all the different examples.
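To make this concrete, here is a minimal sketch in NumPy showing one such factorization, SVD, applied to a small ratings-style matrix (the matrix values are purely illustrative): the matrix breaks into three factors whose product recovers the original.

```python
import numpy as np

# A small "user-item" style ratings matrix (values are illustrative).
A = np.array([[5.0, 3.0, 0.0],
              [4.0, 0.0, 1.0],
              [1.0, 1.0, 5.0],
              [0.0, 4.0, 4.0]])

# One way to factorize A: singular value decomposition, A = U * diag(s) * Vt.
U, s, Vt = np.linalg.svd(A, full_matrices=False)

# Multiplying the factors back together recovers the original matrix
# (up to floating-point error).
A_reconstructed = U @ np.diag(s) @ Vt
print(np.allclose(A, A_reconstructed))  # True
```

Just as with 12 = 3 × 4, the factors multiply back to the thing we started with; the difference is that for matrices we get to choose which kind of factors we want.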
Visualization of the SVD of a two-dimensional, real shearing matrix M.
How you factor a matrix basically comes down to what constraints you put on the factors (the matrices that, when multiplied together, form the original). Do you want just two factors? Do you want more? Do you want them to have particular characteristics, such as orthogonal columns? Do you want their eigenvalues or eigenvectors to satisfy specific conditions?
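This constraint question is exactly the distinction Aggarwal is drawing. A quick sketch, using a random matrix and a simple alternating-least-squares loop as a stand-in for unconstrained factorization (the rank k and iteration count are arbitrary choices here): SVD's factors have orthonormal columns by construction, while an unconstrained factorization generally does not.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))

# SVD constrains the columns of U and V to be orthonormal.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
print(np.allclose(U.T @ U, np.eye(3)))    # True: orthonormal columns
print(np.allclose(Vt @ Vt.T, np.eye(3)))  # True

# An unconstrained rank-2 factorization A ~ W @ H, fit by alternating
# least squares, carries no such orthogonality constraint.
k = 2
W = rng.standard_normal((4, k))
for _ in range(50):
    H = np.linalg.lstsq(W, A, rcond=None)[0]        # solve for H given W
    W = np.linalg.lstsq(H.T, A.T, rcond=None)[0].T  # solve for W given H
print(np.allclose(W.T @ W, np.eye(k)))  # typically False: no orthogonality
```

Both decompositions approximate (or exactly reproduce) the original matrix; only the constraints on the factors differ, which is why calling the unconstrained version "SVD" is, strictly speaking, a misnomer.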
Link:
http://blog.yhat.com/posts/singular-value-decomposition.html
Original post:
http://weibo.com/1402400261/ED2NCs3Y2?from=page_1005051402400261_profile&wvr=6&mod=weibotime&type=comment#_rnd1491390088927