Hello! I have two questions related to the lecture about Adversarial ML.
In the paragraph about the adversarial training algorithm for building a robust classifier, I didn't precisely understand what happens when we apply the defined algorithm to the example with one robust and many non-robust features. More precisely, I don't see how the formula can change from the optimization over θ to the one over a, and then I can't see why deriving the gradient gives the results reported in the notes.
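To make my question concrete, here is a toy sketch of how I currently understand the setup (the data, feature scales, and budget eps are my own invention, not the notes' exact example): a linear model with one strongly correlated "robust" feature and several weakly correlated "non-robust" ones, trained with the l_inf inner maximization solved in closed form, so the worst-case margin is y·wᵀx − ε‖w‖₁.

```python
import math, random

# Toy adversarial training sketch (my own toy data, not the notes' example):
# one robust feature plus several weakly correlated non-robust features,
# l_inf perturbation budget eps, linear model with logistic loss.
random.seed(0)
eps = 0.5
n, d_nr = 500, 5  # samples, number of non-robust features

data = []
for _ in range(n):
    y = random.choice([-1, 1])
    x = [2.0 * y]                                               # robust: margin >> eps
    x += [0.1 * y + random.gauss(0, 1) for _ in range(d_nr)]    # non-robust: margin < eps
    data.append((x, y))

w = [0.1] * (1 + d_nr)
lr = 0.05
for _ in range(300):
    grad = [0.0] * len(w)
    for x, y in data:
        # inner max in closed form: worst-case margin = y*w.x - eps*||w||_1
        m = y * sum(wi * xi for wi, xi in zip(w, x)) - eps * sum(abs(wi) for wi in w)
        g = -1.0 / (1.0 + math.exp(m))  # d(logistic loss)/d(margin)
        for j in range(len(w)):
            sgn = 1 if w[j] > 0 else -1 if w[j] < 0 else 0
            grad[j] += g * (y * x[j] - eps * sgn)
    w = [wi - lr * gj / n for wi, gj in zip(w, grad)]

print([round(wi, 3) for wi in w])  # robust weight grows; non-robust weights shrink toward 0
```

If I run this, the −ε·sign(w) term from the inner maximum penalizes every nonzero weight, so only features whose correlation with y exceeds ε survive, which is how I read the claim in the notes; my question is about the formal derivation of exactly this effect.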
In the last paragraph, about randomized smoothing, it is not very clear to me why, in the example of the points on the line, the condition that "all points originally labeled y=-1 are to the right of x, on the tail of the Gaussian" corresponds to "all points to the left of x + σQ^-1(1-p) are labeled y=1 and all points to the right y=-1".
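Again to make the question concrete, here is a small numeric check of how I understand the 1-D case (the threshold classifier f, the values of x, t, σ, and the names are my own assumptions, not the notes'): if f labels y=+1 left of a threshold t and y=-1 to its right, then P[f(x+σε)=+1] = Φ((t−x)/σ), so the smoothed classifier outputs y=+1 at x exactly when t ≥ x + σΦ^-1(p) = x + σQ^-1(1−p), with Q = 1 − Φ the Gaussian tail.

```python
import math, random

# Hypothetical 1-D base classifier: y=+1 left of threshold t, y=-1 right of it.
def f(x, t=1.0):
    return 1 if x < t else -1

def smoothed_vote(x, t=1.0, sigma=0.5, n=200_000, seed=0):
    """Monte Carlo estimate of P[f(x + sigma*eps) = +1], eps ~ N(0,1)."""
    rng = random.Random(seed)
    hits = sum(f(x + sigma * rng.gauss(0.0, 1.0), t) == 1 for _ in range(n))
    return hits / n

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Closed form: P[x + sigma*eps < t] = Phi((t - x)/sigma).
# This probability is >= p iff t >= x + sigma*Phi^-1(p) = x + sigma*Q^-1(1-p).
x, t, sigma = 0.2, 1.0, 0.5
est = smoothed_vote(x, t, sigma)
exact = phi((t - x) / sigma)
print(est, exact)  # the Monte Carlo estimate and the closed form should agree closely
```

Under this reading, "all y=-1 points are right of x + σQ^-1(1−p)" is just the statement that the nearest y=-1 region is far enough into the Gaussian tail; is this the correspondence the notes intend?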
Thanks in advance for your answers!
Adversarial training and Randomized smoothing