Hey, is ALS an iterative algorithm or not? If I understand correctly, it is not, but I am not sure: if it is not an iterative algorithm, how can we compare it with an SGD step? We would need to compare it with the whole SGD run until it stops, right? Or am I missing something?
Thanks,
Hi, it is iterative: you first take the derivative w.r.t. W while pretending Z is fixed, which gives a closed-form solution. This W_pseudo_optimal you plug into your cost function, and now you differentiate w.r.t. Z, with W held fixed.
So you "alternate the least-squares solution finding between the two variables Z and W".
Since the Zs and Ws are not the "real optimal ones" (the problem is not jointly convex), you run it until it converges, hopefully to some local minimum.
Due to the matrix inversion inside each least-squares step, it is expensive per step (roughly N^3 in the size of the inverted matrix), but hopefully each step moves you further towards the local minimum than an SGD step would.
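To make the alternation concrete, here is a minimal NumPy sketch, assuming a fully observed matrix X ≈ W Zᵀ and a small ridge term `lam` (the function name `als` and all parameter values are just illustrative, not from the course):

```python
import numpy as np

def als(X, k=2, lam=0.1, n_iters=50):
    """Alternating least squares for X ≈ W @ Z.T (fully observed X).

    Each half-step solves a ridge-regularized least-squares problem in
    closed form; the cubic cost per step comes from solving a k x k system.
    """
    n, m = X.shape
    rng = np.random.default_rng(0)
    W = rng.standard_normal((n, k))
    Z = rng.standard_normal((m, k))
    for _ in range(n_iters):
        # Fix Z, solve for W: (Z^T Z + lam I) W^T = Z^T X^T
        W = np.linalg.solve(Z.T @ Z + lam * np.eye(k), Z.T @ X.T).T
        # Fix W, solve for Z: (W^T W + lam I) Z^T = W^T X
        Z = np.linalg.solve(W.T @ W + lam * np.eye(k), W.T @ X).T
    return W, Z

# Usage: factor a rank-2 matrix; the reconstruction error should be small.
X = np.outer([1., 2., 3.], [1., 0., 2., 1.]) + np.outer([0., 1., 1.], [2., 1., 0., 1.])
W, Z = als(X, k=2)
err = np.linalg.norm(X - W @ Z.T)
```

Each `np.linalg.solve` call is one "least-squares half-step" of the alternation; note that it is the k x k Gram matrix that gets inverted, so the per-step cost grows cubically in the factorization rank, not in the data size.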
Hope that helps! :)
https://www.oknoname.com/EPFL/CS433/topic/2857/q17-exam-2016/