Where here we ask what happens to the infima and sets of minimisers of sequences of functions: under what conditions do these converge? What is an appropriate notion of convergence for functions which transfers to the convergence of the corresponding sequences of minima and minimisers? This poses a question of continuity for the infimum (as an operator) as well as for the set of minimisers (as a multi-valued operator). We aim at characterising the continuity of these operators. Continue reading →
A while ago I posted this article on how to project onto the epigraph of a convex function, where I derived the optimality conditions and the KKT conditions. This post comes as an addendum presenting a third way to project onto an epigraph. Do read the previous article first, because I use the same notation here. Continue reading →
In this post we discuss the correspondence between the Lagrangian and the Fenchelian duality frameworks and we trace their common origin to the concept of convex conjugate functions and perturbed optimization problems. Continue reading →
Here we study the problem of projecting onto the epigraph of a convex continuous function. Compared with computing the proximal operator of a function or projecting onto its sublevel sets, projecting onto an epigraph is more complex, and semi-explicit formulas are available only for a few functions.
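As a small illustration of such a semi-explicit formula, here is a sketch of projecting a point $(z, t)$ onto the epigraph of $f(x) = x^2$ in one dimension. It uses the standard KKT characterisation: if $(z, t)$ lies outside the epigraph, the projection is $(\mathrm{prox}_{\lambda f}(z),\, t + \lambda)$ where $\lambda \ge 0$ solves $f(\mathrm{prox}_{\lambda f}(z)) = t + \lambda$. The choice of $f$, the bisection scheme, and the tolerance are my own illustrative assumptions, not code from the post.

```python
def project_epi_square(z, t, tol=1e-12):
    """Project (z, t) onto the epigraph of f(x) = x^2 (1D sketch)."""
    if z * z <= t:                       # (z, t) already lies in the epigraph
        return z, t
    # For f(x) = x^2 the proximal operator has the closed form z / (1 + 2*lam).
    prox = lambda lam: z / (1.0 + 2.0 * lam)
    # Root function of the KKT condition f(prox(lam)) = t + lam;
    # it is decreasing in lam and positive at lam = 0 when (z, t) is outside.
    h = lambda lam: prox(lam) ** 2 - t - lam
    lo, hi = 0.0, 1.0
    while h(hi) > 0:                     # grow the bracket until it contains the root
        hi *= 2.0
    while hi - lo > tol:                 # plain bisection on lam
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if h(mid) > 0 else (lo, mid)
    lam = 0.5 * (lo + hi)
    return prox(lam), t + lam

x_p, s_p = project_epi_square(2.0, 0.0)
print(x_p ** 2 <= s_p + 1e-9)            # the projection lies on the epigraph boundary
```

The same bisection-on-$\lambda$ pattern applies to any convex $f$ whose prox is cheap to evaluate; only the closed-form prox above is specific to $f(x) = x^2$.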
Given an optimisation problem of the form , where , can we equivalently solve the problem ? Continue reading →
Metric sub-regularity is a local property of set-valued operators which turns out to be a key enabler for linear convergence in several operator-based algorithms. However, conditions are often stated for the fixed-point residual operator and become rather difficult to verify in practice. In this post we state sufficient metric sub-regularity conditions for a monotone inclusion which are easier to verify and we focus on the preconditioned proximal point method (P3M). Continue reading →
Where here we wonder why there are no generic termination criteria for the gradient method which guarantee a desired sub-optimality when $f$ is strictly (not strongly) convex and has an $L$-Lipschitz gradient. Does it make sense to terminate when $\|\nabla f(x_k)\|$ is small? And when should we terminate? Continue reading →
The gradient method with constant step length is the simplest method for solving unconstrained optimisation problems involving a continuously differentiable function with Lipschitz-continuous gradient. The motivation for this post came after reading this Wikipedia article, where it is stated that under certain assumptions the sequence of iterates converges to a local optimum, but no further discussion is provided. Continue reading →
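To fix ideas, the method in question can be sketched in a few lines. The example below runs the gradient iteration $x_{k+1} = x_k - \tfrac{1}{L}\nabla f(x_k)$ on $f(x) = \log\cosh(x - 3)$, whose gradient $\tanh(x - 3)$ is Lipschitz with $L = 1$; the test function, step size, and iteration count are my own choices for illustration, not taken from the post.

```python
import math

# f is smooth and convex with minimiser x* = 3; its gradient tanh(x - 3)
# is Lipschitz continuous with constant L = 1.
f = lambda x: math.log(math.cosh(x - 3.0))
grad = lambda x: math.tanh(x - 3.0)

L = 1.0
step = 1.0 / L          # constant step length 1/L

x = 0.0                 # starting point
for _ in range(200):
    x -= step * grad(x) # gradient iteration x_{k+1} = x_k - (1/L) grad f(x_k)

print(round(x, 4))      # the iterates approach the minimiser x* = 3
```

With the step fixed at $1/L$ the function values decrease monotonically; what the Wikipedia statement leaves open, and the post examines, is the precise sense in which the iterates themselves converge.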