Tutorial 5: learning_by_penalized_gradient

Hello,
In the solution to tutorial 5, for the method learning_by_penalized_gradient, it looks like you return the loss and the weight vector w, but the loss does not correspond to the returned w. You first compute the loss with the old w, then compute the new w, but you never recompute the loss for the new w. Do you see what I mean?

"
loss, gradient = penalized_logisticregression(y, tx, w, lambda_) # loss for old w
w -= gamma * gradient # compute new w
return loss, w # old loss and new w
"

Yeah, it is a bit ambiguous here. In the exercise we intended to reuse the loss already computed by penalized_logisticregression, so the returned loss is indeed the one for the previous w. For project 1, the functions only output the loss at the last iteration, so the difference is negligible and you don't need to worry about it.
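
To illustrate (a sketch only, with placeholder names like penalized_gradient_descent and max_iters rather than the exact project 1 signature): in a loop like the one below, only the loss from the final call is kept, so it lags the returned w by a single gradient step.

"
def penalized_gradient_descent(y, tx, initial_w, max_iters, gamma, lambda_):
    # run several penalized gradient steps; keep only the most recent loss
    w = initial_w
    for _ in range(max_iters):
        loss, gradient = penalized_logisticregression(y, tx, w, lambda_)  # loss for current w
        w = w - gamma * gradient                                          # gradient step
    # the returned loss was computed just before the final update of w,
    # which makes essentially no difference after many iterations
    return loss, w
"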
