Training risk

In the 2019 final exam, true/false question 15, it is stated that the training risk converges to the true risk as \( |S| \) goes to infinity.
From what I understood from the lectures, the generalization bound is stated for the test risk, because we could overfit if we used the training data for validation. So how can we say that the training risk converges to the true risk?
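For concreteness, the kind of bound I mean (writing it from memory, so the exact constants and notation may differ from the lecture notes) is that, with probability at least \( 1 - \delta \) over the draw of \( S \),
\[
R(h) \;\le\; \hat{R}_S(h) + \sqrt{\frac{\ln(2|\mathcal{H}|/\delta)}{2|S|}} \qquad \text{for all } h \in \mathcal{H},
\]
where \( R(h) \) is the true risk, \( \hat{R}_S(h) \) is the empirical (training) risk on \( S \), and \( \mathcal{H} \) is a finite hypothesis class.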

Hey, I think the point is that using a larger data set is a form of regularization, so you cannot overfit the data as \( |S| \rightarrow \infty \); instead you converge to the 'true' model, and your remaining error consists of the zero-mean noise that we assumed is part of the data model itself. In other words, if the size of your dataset increases indefinitely, you will not be able to fit all the points exactly. I could be wrong, but I think this makes sense. Let me know what you think :)
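To make this slightly more precise (my own sketch, assuming i.i.d. samples and a bounded loss \( \ell \)): for any fixed hypothesis \( h \), the law of large numbers gives
\[
\hat{R}_S(h) \;=\; \frac{1}{|S|} \sum_{(x_i, y_i) \in S} \ell\big(h(x_i), y_i\big) \;\xrightarrow{\;|S| \to \infty\;}\; \mathbb{E}_{(x,y)}\big[\ell(h(x), y)\big] \;=\; R(h).
\]
The generalization bound upgrades this to uniform convergence over the whole hypothesis class, so it also applies to the hypothesis \( \hat{h}_S \) learned from \( S \); hence its training risk converges to its true risk, and what remains is the irreducible noise.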

Yeah, this sounds like a good explanation.
Thanks.

