
Backpropagation NN

Hello, is it correct that, in general, a neural network trained using backpropagation is non-convex (so a global optimum is not necessarily reached)? Thanks for your help.

Top comment

Backpropagation is not an optimization method, so we can't technically say that an NN is trained using backpropagation. Backpropagation is a technique for efficiently computing gradients in NNs; those gradients are then used by optimization algorithms to train the NN.
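To make that separation concrete, here is a minimal PyTorch sketch (the architecture, toy data, and learning rate are arbitrary choices for illustration): `loss.backward()` is the backpropagation step that computes gradients, while the separate `optimizer.step()` call is what actually updates the weights.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 8), nn.Tanh(), nn.Linear(8, 1))
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.MSELoss()

x, y = torch.randn(32, 4), torch.randn(32, 1)  # toy batch (assumption)

optimizer.zero_grad()        # clear gradients from the previous step
loss = loss_fn(model(x), y)  # forward pass
loss.backward()              # backpropagation: computes d(loss)/d(param) for every parameter
optimizer.step()             # optimization: SGD uses those gradients to update the weights
```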
As for your question: yes, the loss functions of NNs are non-convex in general, so gradient-based optimization methods such as SGD and its variants are not guaranteed to converge to a global optimum. However, one open question about NNs trained with SGD is why the models found seem to generalize well even in high-dimensional regimes where they would be expected to overfit.
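As a small illustration of the non-convexity (the toy data, architecture, and permutation below are assumptions for the sketch): permuting the hidden units of a one-hidden-layer network gives a different parameter vector that computes exactly the same function, hence the same loss. If the loss were convex in the parameters, the midpoint of two equal-loss points would have loss at most that value; numerically it is typically larger, which rules out convexity.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(64, 3))              # toy inputs (assumption)
Y = np.sin(X.sum(axis=1, keepdims=True))  # toy targets (assumption)

def loss(W1, W2):
    """MSE of a one-hidden-layer tanh network, W1: (3, 4), W2: (4, 1)."""
    return np.mean((np.tanh(X @ W1) @ W2 - Y) ** 2)

W1 = rng.normal(size=(3, 4))
W2 = rng.normal(size=(4, 1))

# Permuting the hidden units yields a distinct parameter vector
# that computes exactly the same function, hence the same loss.
perm = [1, 0, 3, 2]
W1p, W2p = W1[:, perm], W2[perm, :]
assert np.isclose(loss(W1, W2), loss(W1p, W2p))

# Convexity would require loss at the midpoint <= the shared endpoint loss;
# here the midpoint collapses hidden units and the loss typically goes up.
mid = loss((W1 + W1p) / 2, (W2 + W2p) / 2)
print(loss(W1, W2), mid)  # mid typically exceeds the endpoint loss
```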
