Er Raqabi El Mehdi
2 min read · Nov 28, 2020


Hello Greg,

Let me try to elaborate a bit further on your interesting points.

- The lower bound for a minimization scenario is just an example to highlight that sometimes we can get some information, but not enough to solve the problem. There are many other approaches in non-convex optimization, including convexification (making a non-convex function convex), approximation, reducing the search space, relaxation, etc. In many of them, however, we know neither the destination nor how to reach it. You may dig deeper into them if you want (I sketch a tiny relaxation example right after this list).

- Exactly, they have properties that allow reaching the global optimum differently. Just to recapitulate, saying "little art" means that there are ready-made formulas to apply for least-squares. It is a kind of mechanical computation, with not much innovation in it over the years (the second sketch after this list shows the closed-form solution).

- That's just one insight I got while thinking about and learning from the convex and the non-convex cases. Why would one not expect that in the first place? :-) For convex optimization, there are many methods, and the choice of the suitable one is based on execution time, i.e., the one that leads us to the optimum most quickly. In non-convex optimization, there is room for many (intriguing) classes of functions.
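
To make the first point concrete, here is a minimal sketch of a lower bound obtained by relaxation. The tiny integer program, its numbers, and the use of SciPy are my own illustration, not something from the article: dropping the integrality requirement gives an easy convex problem whose optimum is a valid lower bound on the true optimum, i.e., some information, but not the answer.

```python
# Minimal sketch (illustrative toy problem): a relaxation gives a lower bound.
# Minimize x + y subject to 2x + 3y >= 4, 3x + 2y >= 4, x, y >= 0 and integer.
import itertools
from scipy.optimize import linprog

c = [1, 1]                    # objective: minimize x + y
A_ub = [[-2, -3], [-3, -2]]   # -2x - 3y <= -4  and  -3x - 2y <= -4
b_ub = [-4, -4]

# LP relaxation: drop integrality, keep x, y >= 0 (easy to solve)
relaxed = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None), (0, None)])
print("Relaxation lower bound:", relaxed.fun)   # 1.6

# True integer optimum by brute force over a small grid
best = min(
    x + y
    for x, y in itertools.product(range(5), repeat=2)
    if 2 * x + 3 * y >= 4 and 3 * x + 2 * y >= 4
)
print("Integer optimum:", best)                 # 2
```

The bound 1.6 tells us the optimum cannot be below 1.6, yet it does not tell us the optimum (2) or how to reach it.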

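And to make the second point concrete, here is a minimal sketch of the "ready-made formula" for least-squares, again my own illustration with made-up data and assuming NumPy: the normal equations give the solution in closed form, which is the mechanical part.

```python
# Minimal sketch (illustrative toy data): closed-form least-squares solution.
# For min_x ||A x - b||^2 the solution satisfies the normal equations A^T A x = A^T b.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 3))    # toy design matrix
b = rng.normal(size=50)         # toy observations

# Ready-made formula: solve the normal equations directly
x_normal = np.linalg.solve(A.T @ A, A.T @ b)

# Same result from the library routine
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)

print(np.allclose(x_normal, x_lstsq))  # True
```
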
Thank you so much for the feedback. I would totally agree if this were not a blog. Since I target a general audience, you may expect less rigor; too much detail would make the article very heavy for someone with a different background. As we definitely agree, rigor is more crucial when it comes to peer-reviewed journals, which is not the case here. My insights are not even claims, because claims require proofs.

Best Regards.

Mehdi.
