Adversarial robustness part of lecture

I have a question regarding this transformation on page 10 of lecture09c_adversarial:
\( x = (x_1, \cdots, x_D) \), where \( x_i = a_i y + z_i \).

How do we have access to \(y\)?
What exactly are we trying to do?

Thank you

Hi,

You have a classification problem: given \( x \), you want to recover the associated label \( y \).
As usual, \((x, y)\) follows some distribution \(\mathcal{D}\), and in this precise example \(\mathcal{D}\) is the following:
first sample \(y \in \{+1, -1\}\) uniformly, and then, given \(y\), sample \(x\) according to:

$$ x_1 = y +z_1, \text{ and } x_i = \alpha y + z_i \text{ for } i>1 $$
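For concreteness, here is a minimal NumPy sketch of how one could sample from \(\mathcal{D}\). It assumes the noise terms \(z_i\) are i.i.d. standard Gaussians, and the values of \(\alpha\) and the dimension \(D\) are only illustrative; the lecture's exact noise model and constants may differ.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample(n, D=1000, alpha=0.1):
    """Draw n pairs (x, y): y uniform on {+1, -1}, x_1 = y + z_1, x_i = alpha*y + z_i for i > 1."""
    y = rng.choice([-1.0, 1.0], size=n)    # label y, uniform on {+1, -1}
    z = rng.standard_normal((n, D))        # noise z_1, ..., z_D (assumed standard Gaussian)
    x = alpha * y[:, None] + z             # x_i = alpha*y + z_i for every coordinate ...
    x[:, 0] = y + z[:, 0]                  # ... then overwrite the first one: x_1 = y + z_1
    return x, y
```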

The question is how to recover the correct label given a new \(x\). In practice you will have a training set, and using this training set you will find some classification function. But as a baseline, assume that you know the data distribution \(\mathcal{D}\): you know that \(x\) has the form above, but you do not know \(y\) and you do not observe the noise terms \(z_i\); you only have the values of the components of \(x\) to make your decision. You can then compute the Bayes classifier, which is the best you can hope for (see the previous lecture on classification); a small sketch follows below.
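Under the Gaussian-noise assumption above, the log-likelihood ratio between \(y = +1\) and \(y = -1\) works out to a linear function of \(x\), so the Bayes classifier takes the simple form sketched here (this is only an illustration under that assumption, not necessarily the exact expression on the slide):

```python
import numpy as np

def bayes_classifier(x, alpha=0.1):
    """Bayes rule under i.i.d. standard Gaussian noise: the sign of the log-likelihood
    ratio, which reduces to sign(x_1 + alpha * sum_{i>1} x_i)."""
    score = x[:, 0] + alpha * x[:, 1:].sum(axis=1)
    return np.sign(score)
```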

What we show in this example is that the standard risk of the Bayes classifier goes to zero as the dimension \(D\) goes to infinity, whereas the adversarial risk stays large.
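A hedged way to see this numerically (still assuming standard Gaussian noise, with illustrative constants): give an \(\ell_\infty\) adversary a budget \(\varepsilon = 2\alpha\), so it can shift every weak coordinate from mean \(+\alpha y\) to mean \(-\alpha y\). The linear rule above then keeps improving with the dimension on clean data but collapses under attack.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha, eps, n = 0.1, 0.2, 5000   # illustrative constants; eps = 2*alpha


def predict(v, alpha):
    # Linear (Bayes-style) rule: sign(x_1 + alpha * sum_{i>1} x_i).
    return np.sign(v[:, 0] + alpha * v[:, 1:].sum(axis=1))


for D in (10, 100, 1000):
    # Sample from the distribution (standard Gaussian noise assumed).
    y = rng.choice([-1.0, 1.0], size=n)
    z = rng.standard_normal((n, D))
    x = alpha * y[:, None] + z
    x[:, 0] = y + z[:, 0]

    # Worst-case l_inf perturbation of size eps on the weak coordinates:
    # shift each by -eps*y, flipping its mean from +alpha*y to -alpha*y.
    x_adv = x.copy()
    x_adv[:, 1:] -= eps * y[:, None]

    std_acc = np.mean(predict(x, alpha) == y)
    adv_acc = np.mean(predict(x_adv, alpha) == y)
    print(f"D={D:5d}  standard acc={std_acc:.3f}  adversarial acc={adv_acc:.3f}")
```

With these illustrative constants, the clean accuracy improves as \(D\) grows while the attacked accuracy degrades, which is exactly the gap between standard and adversarial risk the slide points at.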
