
Adversarial training and Randomized smoothing

Hello! I have two questions related to the lecture about Adversarial ML.

  1. In the paragraph about the adversarial training algorithm for building a robust classifier, I didn't precisely understand what happens when we apply the algorithm to the example with one robust and many non-robust features. Specifically, I don't see how the formula changes from the optimization over θ to the one over a, and I can't follow why differentiating gives the gradient reported in the notes.
  2. In the last paragraph, about randomized smoothing, it is not clear to me why, in the example of the points on a line, the condition that "all points originally labeled y=-1 are on the right of x on the tail of the Gaussian" corresponds to "all points left of x + σQ^-1(1-p) are labeled y=+1 and all points to the right y=-1".
    Thanks in advance for your answers!
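To make question 1 concrete, here is a small numerical sketch of the part I do follow (my own illustration, not the exact derivation from the notes): for a linear classifier and an ℓ∞ perturbation ball of radius ε, the inner maximization of adversarial training has the closed form δ* = -ε·y·sign(θ), which replaces the margin y·θᵀx with y·θᵀx - ε·‖θ‖₁; it is the next step, rewriting the outer optimization over θ as one over a, that I am stuck on.

```python
import numpy as np

# Illustrative sketch (not the notes' derivation): for a linear classifier
# f(x) = theta @ x with label y in {-1, +1}, the worst-case l_inf
# perturbation of radius eps is  delta* = -eps * y * sign(theta),
# turning the margin  y * theta @ x  into  y * theta @ x - eps * ||theta||_1.

rng = np.random.default_rng(0)
theta = rng.normal(size=5)   # hypothetical classifier weights
x = rng.normal(size=5)       # hypothetical input
y = 1.0
eps = 0.1

def margin(theta, x, y):
    return y * theta @ x

# Closed-form worst-case margin after the adversarial perturbation.
delta_star = -eps * y * np.sign(theta)
worst_closed_form = margin(theta, x + delta_star, y)

# Brute-force check: no corner of the l_inf ball does worse than delta*.
corners = eps * np.sign(rng.normal(size=(1000, 5)))
worst_sampled = min(margin(theta, x + d, y) for d in corners)

assert np.isclose(worst_closed_form, margin(theta, x, y) - eps * np.abs(theta).sum())
assert worst_sampled >= worst_closed_form - 1e-12
```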
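For question 2, here is how I currently picture the 1-D setting, as a sketch under my own assumptions (that Q is the Gaussian tail function, so Q^-1(1-p) = Φ^-1(p), and that the base classifier is a threshold on the line, labeling everything left of some t as y=+1 and everything right as y=-1); the threshold t and the numbers below are hypothetical. Then the smoothed classifier at x predicts +1 with probability p = Φ((t - x)/σ), which pins the threshold at t = x + σΦ^-1(p) = x + σQ^-1(1-p):

```python
from statistics import NormalDist
import random

# Hedged 1-D randomized-smoothing sketch under the assumptions above:
# base classifier labels points left of a threshold t as y=+1, right as y=-1.
# At query point x the smoothed classifier predicts +1 with probability
#     p = P_{eps ~ N(0, sigma^2)}(x + eps < t) = Phi((t - x) / sigma),
# so knowing p recovers the threshold:  t = x + sigma * Phi^-1(p).

N = NormalDist()
sigma = 0.7
t = 1.3          # hypothetical base-classifier threshold
x = 0.9          # hypothetical query point

# Monte Carlo estimate of p = P(f(x + eps) = +1).
random.seed(0)
samples = [x + random.gauss(0.0, sigma) for _ in range(200_000)]
p_hat = sum(s < t for s in samples) / len(samples)
p_exact = N.cdf((t - x) / sigma)

# Recover the boundary from (x, sigma, p): all points left of it are
# labeled y=+1 by the base classifier, all points to the right y=-1.
t_recovered = x + sigma * N.inv_cdf(p_exact)

assert abs(p_hat - p_exact) < 0.01
assert abs(t_recovered - t) < 1e-9
```

What I can't see is why the quoted condition about the tail of the Gaussian is equivalent to this threshold statement.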
