## Frank-Wolfe reference frame

Hello,

In the definition of the linear minimization oracle (LMO) of the Frank-Wolfe algorithm, it makes sense that s, the projection onto the border of the convex set X, is the point minimizing the dot product with the gradient only if our reference frame is centered on x: indeed, we are comparing a "shift" (the gradient) with an absolute position (the new position s). I don't quite see where this reference frame was defined to have this property, and am therefore not sure about my reasoning.

Intuitively, I would have defined the LMO as follows, to get rid of the dependence on the reference frame:

\(\mathrm{LMO}(g, x) := x + \operatorname{argmin}_{\delta \,:\, x + \delta \in X} \langle \delta, g \rangle\)

Any help is greatly appreciated!

Yann Mentha

No, actually s is not the projection onto the set: s (the output of the LMO) is only the linear minimizer. You're right that the LMO does not depend on the reference point x, the current iterate. It only depends on the set and on the direction we're asking about, i.e. g.
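To make this concrete, here is a minimal sketch (not from the course; the L1 ball and the name `lmo_l1_ball` are my own illustrative choices) showing that the oracle's signature takes only the direction g, while the iterate x enters only afterwards when forming the step:

```python
import numpy as np

def lmo_l1_ball(g, radius=1.0):
    """Linear minimization oracle over X = {s : ||s||_1 <= radius}:
    returns s = argmin_{s in X} <s, g>.
    Note the signature: the oracle needs only g (and the set),
    never the current iterate x."""
    i = np.argmax(np.abs(g))           # coordinate with the largest |g_i|
    s = np.zeros_like(g, dtype=float)
    s[i] = -radius * np.sign(g[i])     # vertex opposite to the gradient sign
    return s

# A Frank-Wolfe step then uses s relative to the current iterate x:
x = np.array([0.2, 0.3, -0.1])
g = np.array([0.5, -2.0, 1.0])         # gradient at x
s = lmo_l1_ball(g)                     # s = [0., 1., 0.], an absolute point of X
gamma = 0.5
x_next = x + gamma * (s - x)           # convex combination, stays inside X
```

Here s is an absolute point of X (a vertex of the ball), and the "shift" s - x is formed only in the update step, which matches the answer above: no reference frame centered on x is needed inside the LMO itself.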
