
SVM soft margin

I was wondering: if the penalty for being on the wrong side of the margin (the slack variable cost) is very small, is it possible that, when we apply a soft-margin SVM to a linearly separable problem, the margin becomes larger than in the hard-margin SVM case and some training points get misclassified?

Thank you for your help

Thank you for this very interesting question.

If the penalization for being on the wrong side of the margin (the slack variable cost) is too small, meaning that the regularization parameter \(\lambda\) we have seen in the course is too large, then you will mostly try to minimize the norm of \(w\), and therefore to maximize the margin, to the detriment of separating your data well. So yes, in that regime a soft-margin SVM can misclassify training points even when the data are linearly separable.

At the two extremes, you see that if \(\lambda = 0\) then you only care about separating your data, and if \(\lambda = \infty\) then you no longer care about your data at all and you only minimize the norm of \(w\) (you will then output \(w = 0\)).
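For concreteness, here is a sketch of the soft-margin objective in hinge-loss form (assuming the regularized formulation with parameter \(\lambda\) and no bias term, which may differ slightly from the exact notation in the course):

\[
\min_{w \in \mathbb{R}^d} \; \frac{1}{n}\sum_{i=1}^{n} \max\bigl(0,\; 1 - y_i\, w^\top x_i\bigr) \;+\; \frac{\lambda}{2}\, \|w\|^2 .
\]

With \(\lambda = 0\) only the separation (hinge-loss) term remains, and as \(\lambda \to \infty\) the regularizer dominates and pushes the solution towards \(w = 0\).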

That is why, in practice, you will have to do cross-validation to pick the right value for \(\lambda\).
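As an illustration (not the course code), here is a minimal sketch using scikit-learn, where the parameter C of SVC roughly plays the role of \(1/\lambda\): a tiny C corresponds to a weak slack penalty and a wide margin.

from sklearn.datasets import make_blobs
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

# Linearly separable toy data: two well-separated clusters.
X, y = make_blobs(n_samples=200, centers=2, cluster_std=1.0, random_state=0)

# Cross-validate over several values of C (roughly 1/lambda) to pick the slack penalty.
grid = GridSearchCV(SVC(kernel="linear"),
                    param_grid={"C": [1e-3, 1e-2, 1e-1, 1, 10, 100]},
                    cv=5)
grid.fit(X, y)
print("best C:", grid.best_params_["C"], "- cv accuracy:", grid.best_score_)

# With a very small C (weak slack penalty, i.e. large lambda) the margin can
# widen enough to misclassify training points even though the data are separable.
weak = SVC(kernel="linear", C=1e-3).fit(X, y)
print("training errors with tiny C:", int((weak.predict(X) != y).sum()))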

On the contrary, for the hard-margin SVM, where you maximize the margin under the constraint that your data are perfectly separated, you will not have this problem.
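For comparison, the hard-margin problem (in its standard form, again up to the exact notation used in the course) enforces separation as a hard constraint rather than a penalty:

\[
\min_{w} \; \frac{1}{2}\|w\|^2 \quad \text{subject to} \quad y_i\, w^\top x_i \ge 1 \;\; \text{for all } i,
\]

so the margin can never be enlarged at the price of misclassifying a training point.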

Do not hesitate to come to the Q&A if you want further details.

Best,
Nicolas

