### Some questions on 2018 exam

-Q3: What is the idea behind this? I fully understand the previous sub-questions, but I really don't see how to reason about this one.

-Q23: I disagree with the solution being false. The description fits exactly the case we saw in class (with f being coordinate-wise L_i-smooth), where we used P[i_t = i] = L_i / (sum of the L_i). The result we stated was that in practice it is faster than plain CD, since the average of the L_i is << L. What am I missing here?

-Q25: Why exactly is the subgradient of ||x - P_i(x)||_2 equal to (x - P_i(x)) / ||x - P_i(x)||_2?
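On Q23, here is a minimal sketch of the sampling scheme described above, on a toy quadratic of my own (not the exam's problem): coordinates are drawn with probability P[i_t = i] = L_i / sum_j L_j, where L_i = A_ii is the coordinate-wise smoothness constant, and each coordinate update uses step size 1/L_i.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy quadratic f(x) = 0.5 x^T A x - b^T x with A positive definite.
n = 20
M = rng.standard_normal((n, n))
A = M @ M.T + np.eye(n)
b = rng.standard_normal(n)
x_star = np.linalg.solve(A, b)     # exact minimizer, for reference

L = np.diag(A).copy()              # coordinate-wise smoothness: L_i = A_ii
p = L / L.sum()                    # importance sampling: P[i_t = i] = L_i / sum_j L_j

x = np.zeros(n)
for t in range(20000):
    i = rng.choice(n, p=p)         # sample a coordinate proportionally to L_i
    g_i = A[i] @ x - b[i]          # i-th partial derivative of f at x
    x[i] -= g_i / L[i]             # step size 1/L_i (not L_i)

print(np.linalg.norm(x - x_star))  # small: the iterates converge to x_star
```

Note the update uses 1/L_i as the step size; using L_i itself diverges, which may be the point the solution is making.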

Top comment

Q3 : I can't help you with the rigorous proof behind it, but it was said in class that once you are close enough to the solution you gain two digits of precision per iteration
Q23 : the step size has to be 1/L_i instead of L_i
Q25 : $$||x|| = \sqrt{x_1^2 + x_2^2 + ... + x_n^2} \implies \frac{\partial ||x||}{\partial x_i} = \frac{x_i}{\sqrt{x_1^2 + x_2^2 + ... + x_n^2}} \implies \nabla ||x|| = \frac{x}{||x||}$$
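The derivation above can be checked numerically with central finite differences (a toy check of my own, with a random point x). One caveat worth adding: the formula only holds for x ≠ P_i(x); at x = P_i(x) the norm is not differentiable, and any vector of norm at most 1 is a valid subgradient there.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(5)

analytic = x / np.linalg.norm(x)   # claimed gradient of ||x||_2 at x != 0

# Independent check: central finite differences coordinate by coordinate.
eps = 1e-6
numeric = np.zeros_like(x)
for i in range(len(x)):
    e = np.zeros_like(x)
    e[i] = eps
    numeric[i] = (np.linalg.norm(x + e) - np.linalg.norm(x - e)) / (2 * eps)

print(np.max(np.abs(analytic - numeric)))  # tiny: the two gradients agree
```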


Hi, for Q3, the lecture says "In this last phase, we essentially double the number of correct digits in each iteration". That does not seem to mean "gain two digits of precision per iteration". I am confused.

Hi, I do not understand why the answer to Q3 is 16/2. Could someone please clarify?

I don't understand either; once we are close, we should double the number of correct digits at each iteration.
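The "doubling" behaviour is easy to observe numerically. A small sketch (my own toy example, Newton's method on f(x) = x^2 - 2, whose root is sqrt(2)) that counts correct decimal digits per iteration:

```python
from decimal import Decimal, getcontext

getcontext().prec = 80                     # enough precision to watch the doubling

root = Decimal(2).sqrt()                   # reference value: sqrt(2)
x = Decimal(1)                             # crude start, close enough to converge
digits_hist = []
for t in range(1, 6):
    x = x - (x * x - 2) / (2 * x)          # Newton step for f(x) = x^2 - 2
    digits = -abs(x - root).log10()        # correct decimal digits so far
    digits_hist.append(float(digits))
    print(f"iter {t}: ~{digits:.1f} correct digits")
```

The count roughly doubles at every step (quadratic convergence), rather than growing by a fixed two digits per iteration.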