
How to compute U and S efficiently?

I would like some help to understand this part:

$$ \mathbf{U S}_{D}^{2} \mathbf{U}^{\top} \mathbf{u}_{j}=s_{j}^{2} \mathbf{u}_{j} $$

Is there a unitary matrix property or a step missing? Or why is:

$$ \mathbf{U S}_{D}^{2} \mathbf{U}^{\top}=s_{j}^{2} $$

In addition, could you explain in more detail how to compute U and S from the eigenvalues and eigenvectors of \(\mathbf{X X}^{\top}\)? Are the eigenvalues of \(\mathbf{X X}^{\top}\) the entries of \(\mathbf{S}\), and the eigenvectors the columns of \(\mathbf{U}\)?

What is the difference between PCA and truncated SVD?

@Anonymous said:
I would like some help to understand this part:

$$ \mathbf{U S}_{D}^{2} \mathbf{U}^{\top} \mathbf{u}_{j}=s_{j}^{2} \mathbf{u}_{j} $$

Is there a unitary matrix property or a step missing?

Correct. \(\mathbf{u}_{j}\) is a column of the orthogonal matrix \(\mathbf{U}\), so \(\mathbf{U}^{\top} \mathbf{u}_{j} = \mathbf{e}_{j}\), the \(j\)-th standard basis vector, which is only nonzero at position \(j\). Hence \(\mathbf{U S}_{D}^{2} \mathbf{U}^{\top} \mathbf{u}_{j} = \mathbf{U S}_{D}^{2} \mathbf{e}_{j} = s_{j}^{2} \mathbf{U} \mathbf{e}_{j} = s_{j}^{2} \mathbf{u}_{j}\).
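To see this numerically, here is a minimal sketch (my own illustration with a random matrix, not part of the original answer):

```python
# Check that U^T u_j = e_j and hence (U S_D^2 U^T) u_j = s_j^2 u_j.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 8))           # hypothetical D x N data matrix
U, s, Vt = np.linalg.svd(X)               # X = U S_D V^T

j = 2
u_j = U[:, j]

# Columns of U are orthonormal, so U^T u_j is the j-th basis vector e_j
print(np.round(U.T @ u_j, 6))

# Applying U S_D^2 U^T to u_j therefore just scales it by s_j^2
lhs = U @ np.diag(s**2) @ U.T @ u_j
print(np.allclose(lhs, s[j] ** 2 * u_j))  # -> True
```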

In addition, could you explain in more detail how to compute U and S from the eigenvalues and eigenvectors of \(\mathbf{X X}^{\top}\)? Are the eigenvalues of \(\mathbf{X X}^{\top}\) the entries of \(\mathbf{S}\), and the eigenvectors the columns of \(\mathbf{U}\)?

Correct, up to squaring: the eigenvalues of \(\mathbf{X X}^{\top}\) are the squared singular values \(s_{j}^{2}\), and the eigenvectors are the columns of \(\mathbf{U}\). Using the SVD of \(\mathbf{X}\) you can write a valid eigenvalue/eigenvector decomposition of \(\mathbf{X X}^{\top}\):

$$\mathbf{X X}^{\top}=\mathbf{U S S}^{\top} \mathbf{U}^{\top}=\mathbf{U S}_{D}^{2} \mathbf{U}^{\top}$$
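As a concrete sketch (assuming a small random \(\mathbf{X}\) purely for illustration), you can recover \(\mathbf{U}\) as the eigenvectors of \(\mathbf{X X}^{\top}\) and the singular values as the square roots of its eigenvalues:

```python
# Recover U and the singular values s_j from the eigendecomposition
# of X X^T, and compare against NumPy's SVD.
import numpy as np

rng = np.random.default_rng(1)
X = rng.standard_normal((4, 10))        # hypothetical D x N data matrix

# Eigendecomposition of the D x D matrix X X^T (cheap when D << N)
evals, evecs = np.linalg.eigh(X @ X.T)

# eigh returns eigenvalues in ascending order; sort descending like the SVD
order = np.argsort(evals)[::-1]
s = np.sqrt(evals[order])               # s_j = sqrt(lambda_j)
U = evecs[:, order]

U_svd, s_svd, _ = np.linalg.svd(X)
print(np.allclose(s, s_svd))            # -> True
# Columns agree up to sign, the usual SVD sign ambiguity
print(np.allclose(np.abs(U.T @ U_svd), np.eye(4), atol=1e-8))
```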

What is the difference between PCA and truncated SVD?

They are closely related. PCA (Principal Component Analysis) is the analysis technique; next to the eigendecomposition of the data covariance matrix, (truncated) SVD is another possible algorithm for computing the principal components. In fact, in most cases (truncated) SVD is the preferred algorithm for PCA (it can handle sparse matrices, has a truncated form, ...).
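A minimal sketch of this relationship (my own illustration, not part of the original answer): PCA via the covariance eigendecomposition and PCA via a truncated SVD of the centered data span the same principal subspace.

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.standard_normal((100, 6))       # hypothetical N x D data matrix
Xc = X - X.mean(axis=0)                 # PCA centers the data first
k = 2                                   # number of components to keep

# Route 1: eigendecomposition of the sample covariance matrix
evals, evecs = np.linalg.eigh(Xc.T @ Xc / (len(Xc) - 1))
W_eig = evecs[:, np.argsort(evals)[::-1][:k]]

# Route 2: truncated SVD of the centered data (top-k right-singular vectors)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
W_svd = Vt[:k].T

# Same subspace, up to a sign flip per component
print(np.allclose(np.abs(W_eig.T @ W_svd), np.eye(k), atol=1e-8))
```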
