
Question 11, exam 2020

Hello,

For question 11, it is stated that flipping the signs of all the weights leading into and out of a hidden neuron leaves the input-output mapping represented by the network unchanged. Is this because tanh(x) is odd? That is, if w1 leads into a hidden unit and w2 leads out of it, then -w2 * tanh(-w1 * x) = -w2 * (-tanh(w1 * x)) = w2 * tanh(w1 * x).
Also, it is stated that interchanging the values of all the weights leaves the network unchanged, but this is true for all networks, right?
Thank you for the clarifications.
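Both symmetries can also be checked numerically. Below is a minimal sketch with a single hidden layer of tanh units; all sizes, names, and the interpretation of "interchanging" as swapping two hidden units are illustrative assumptions, not taken from the exam itself:

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny illustrative network: y = W2 @ tanh(W1 @ x + b1) + b2
W1 = rng.normal(size=(4, 3))
b1 = rng.normal(size=4)
W2 = rng.normal(size=(2, 4))
b2 = rng.normal(size=2)
x = rng.normal(size=3)

def forward(W1, b1, W2, b2):
    return W2 @ np.tanh(W1 @ x + b1) + b2

y = forward(W1, b1, W2, b2)

# 1) Flip the sign of everything in and out of hidden unit 0.
#    Because tanh is odd, tanh(-z) = -tanh(z), so the two flips cancel.
W1f, b1f, W2f = W1.copy(), b1.copy(), W2.copy()
W1f[0] *= -1       # incoming weights of unit 0
b1f[0] *= -1       # its bias counts as an incoming weight
W2f[:, 0] *= -1    # outgoing weights of unit 0
y_flip = forward(W1f, b1f, W2f, b2)

# 2) Swap hidden units 0 and 1 (rows of W1 and b1, columns of W2).
W1s, b1s, W2s = W1.copy(), b1.copy(), W2.copy()
W1s[[0, 1]] = W1s[[1, 0]]
b1s[[0, 1]] = b1s[[1, 0]]
W2s[:, [0, 1]] = W2s[:, [1, 0]]
y_swap = forward(W1s, b1s, W2s, b2)

print(np.allclose(y, y_flip), np.allclose(y, y_swap))  # True True
```

Note that the permutation symmetry in (2) needs no oddness of the activation, so it holds for any activation function as long as the hidden units are identical; the sign-flip symmetry in (1) specifically needs an odd activation such as tanh.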

