2017 Q24 possible error + Q25
Hi,
X.T @ X has dimension N×N whereas the other has (N+1)×(N+1). I think the solution should instead be that X @ X.T and X~ @ X~.T are the same, so they have the same eigendecomposition. In the SVD of X~, S~ has an additional row; for the product to keep the same value, this row must be 0. Then V can be deduced from U.
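A quick NumPy sanity check of the general fact this argument relies on (sketched with a random matrix, not the exam's X): the eigendecomposition of X @ X.T gives the squared singular values and the left singular vectors U, and V can then be recovered from U via V = X.T @ U @ diag(1/s).

```python
import numpy as np

rng = np.random.default_rng(0)
N, D = 4, 6
X = rng.standard_normal((N, D))

# Thin SVD: X = U @ diag(s) @ Vt
U, s, Vt = np.linalg.svd(X, full_matrices=False)

# Eigenvalues of X @ X.T are the squared singular values of X
eigvals = np.linalg.eigvalsh(X @ X.T)
assert np.allclose(np.sort(eigvals)[::-1], s**2)

# Deduce V from U: since X = U S Vt, we get X.T @ U = V @ S,
# so dividing each column by the corresponding singular value gives V
V = X.T @ U / s
assert np.allclose(V, Vt.T)
```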
For this answer, instead of blaming the prof for doing his best to minimize the loss function L(grades) = 1/N * sum_{n=1}^{N} 1[grade_n < 4], could the problem be caused by a bad setting of the parameters or the presence of outliers? Perhaps we should also preserve the same distribution in the training and test sets and normalize the data. Maybe we also need a feature expansion.
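To make the loss above concrete: it is just the failure rate, i.e. the fraction of grades below 4. A minimal sketch with made-up grades:

```python
import numpy as np

# Hypothetical grades out of 6 (illustrative, not from the exam)
grades = np.array([5.5, 3.75, 4.0, 2.5, 5.0])

# L(grades) = 1/N * sum_n 1[grade_n < 4], the fraction of failing grades
loss = np.mean(grades < 4)
print(loss)  # → 0.4 (2 failing grades out of 5)
```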
Not sure if this helps