Representation power

Dear TAs,

I'm struggling to understand why, on a bounded domain, neural nets cannot approximate any "sufficiently smooth" function pointwise.

In the lectures, we saw an example in which a function \(f(x)\) could be approximated in \(L_{\infty}\) by continuous piecewise-linear functions of the form

$$ q(x)=\tilde{a}_{1} x+\tilde{b}_{1}+\sum_{i=2}^{m} \tilde{a}_{i}\left(x-\tilde{b}_{i}\right)_{+} $$

Each term was then represented by a hidden node with its bias.
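To see the construction concretely, here is a minimal numerical sketch (my own illustration, not from the lecture notes): the piecewise-linear interpolant of a smooth \(f\) on \([0,1]\) is written exactly in the form of \(q(x)\) above, where each \(\tilde{a}_i(x-\tilde{b}_i)_+\) term is one hidden ReLU node, and the \(L_\infty\) error shrinks as the number of knots \(m\) grows. The choice of \(f=\sin\) and the interval are arbitrary.

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

# Illustrative smooth target on the bounded domain [0, 1]
f = np.sin

# Knots b_1 < b_2 < ... < b_m; b_1 anchors the affine part a1*x + c1
m = 10
knots = np.linspace(0.0, 1.0, m)
fv = f(knots)

# q interpolates f at the knots: each coefficient a_i (i >= 2) is the
# change in slope of the interpolant at knot b_i.
slopes = np.diff(fv) / np.diff(knots)   # slope on each interval
a1 = slopes[0]
c1 = fv[0] - a1 * knots[0]
a = np.diff(slopes)                     # slope changes at interior knots

def q(x):
    # One hidden layer of ReLU units, one per interior knot
    hidden = relu(x[:, None] - knots[1:-1][None, :])
    return a1 * x + c1 + hidden @ a

xs = np.linspace(0.0, 1.0, 1001)
err = np.max(np.abs(q(xs) - f(xs)))     # L_infinity error on [0, 1]
print(err)
```

The error is uniform over the domain and goes to zero as `m` increases, which is exactly the \(L_\infty\) approximation statement; the network never equals \(f\) exactly at every point, which is the sense in which the representation is not pointwise-exact.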

In summary, I do not understand why the answer to the question below is c) and not b).

[Attached image: nn.jpg]

Thank you in advance! :)

I'm wondering the same thing, especially given that both average and pointwise approximation are possible (depending on the activation function, though no precision is given on that matter in c)). Why is c) correct when it says "but not pointwise"?

I believe they wrote c) by accident; it seems like they are referring to b) in their solution...
