# Continuity of argmin

Here we ask what happens to the infima and the sets of minimisers of a sequence of functions: under what conditions do these converge? What is an appropriate notion of convergence for functions that transfers to the corresponding sequences of minima and minimisers? This poses a question of *continuity* for the infimum (as an operator) as well as for the set of minimisers (as a multi-valued operator). We aim at characterising the continuity of these operators. Continue reading →

# Projection on epigraph via a proximal operator

A while ago I posted this article on how to project onto the epigraph of a convex function, where I derived the optimality conditions and the KKT conditions. This post comes as an addendum presenting a third way to project onto an epigraph. Do read the previous article first, because I use the same notation here. Continue reading →

# Lagrange vs Fenchel Duality

In this post we discuss the correspondence between the Lagrangian and the Fenchelian duality frameworks, and we trace their common origin to the concept of convex conjugate functions and perturbed optimisation problems. Continue reading →
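To give a taste of that common origin, both duals arise from Rockafellar's perturbation framework; here is a sketch with generic symbols of my choosing (not necessarily the notation of the post). One embeds the primal into a family of perturbed problems $p(u) = \inf_x F(x,u)$, with $F(x,0)$ the original objective, and defines the dual through the conjugate of the perturbation function, $d = \sup_y -F^*(0,y) \le p(0)$ (weak duality). Different choices of $F$ then generate the two frameworks:

```latex
\begin{align*}
  F(x,u) &= f(x) + g(Ax + u)
    &&\Longrightarrow\quad \sup_y \; -f^*(-A^\top y) - g^*(y)
    &&\text{(Fenchel dual)}\\
  F(x,u) &= f(x) + \delta_{\mathbb{R}^m_-}\!\bigl(g(x) - u\bigr)
    &&\Longrightarrow\quad \sup_{y \ge 0} \; \inf_x \; f(x) + y^\top g(x)
    &&\text{(Lagrangian dual)}
\end{align*}
```

The second row relaxes the constraint $g(x) \le 0$ to $g(x) \le u$; conjugating the perturbation function with respect to $u$ is precisely what produces the Lagrange multipliers.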

# Quadratic constraints to second-order conic ones

In the previous post we discussed how we can project onto the epigraph of the squared norm. However, optimisation problems often involve quadratic constraints, and these can be converted to second-order conic constraints. Continue reading →
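For concreteness, the standard reformulation can be sketched as follows, with generic symbols $Q$, $L$, $b$, $c$ of my choosing (the post's own notation may differ):

```latex
% Assuming Q \succeq 0 with a factorisation Q = L L^\top, the quadratic
% constraint is equivalent to a second-order cone constraint:
x^\top Q x + b^\top x + c \;\le\; 0
\quad\Longleftrightarrow\quad
\left\| \begin{pmatrix} 2 L^\top x \\ 1 + b^\top x + c \end{pmatrix} \right\|_2
\;\le\; 1 - b^\top x - c.
% Squaring both sides (the right-hand side is nonnegative on the cone)
% and simplifying gives 4 \|L^\top x\|^2 \le -4\,(b^\top x + c).
```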

# Projection on the epigraph of the squared Euclidean norm

As a follow-up on the previous post titled Projection on an epigraph, we here discuss how we can project on the epigraph of the squared norm function. Continue reading →
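As a taste of the computation, here is a minimal numerical sketch. It is a construction of mine from the KKT conditions of the underlying least-distance problem, with generic names, and not necessarily the closed-form derivation used in the post:

```python
import numpy as np

def project_epi_sqnorm(z, s, tol=1e-12):
    """Project (z, s) onto the epigraph {(x, t) : ||x||^2 <= t}.

    From the KKT conditions of
        minimise ||x - z||^2 + (t - s)^2  s.t.  ||x||^2 <= t
    one gets x = z / (1 + 2*lam) and t = s + lam, where lam >= 0
    solves ||z||^2 / (1 + 2*lam)^2 = s + lam whenever the
    constraint is active. We find lam by bisection.
    """
    z = np.asarray(z, dtype=float)
    nz2 = float(z @ z)
    if nz2 <= s:                       # (z, s) already lies in the epigraph
        return z, float(s)
    h = lambda lam: nz2 / (1 + 2 * lam) ** 2 - s - lam
    lo, hi = 0.0, 1.0
    while h(hi) > 0:                   # grow the bracket until h(hi) <= 0
        hi *= 2
    while hi - lo > tol:               # h is strictly decreasing on [0, inf)
        mid = 0.5 * (lo + hi)
        if h(mid) > 0:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    return z / (1 + 2 * lam), s + lam
```

When `||z||^2 <= s` the point is already in the epigraph and is returned unchanged; otherwise the scalar equation in `lam` is strictly decreasing on the nonnegative half-line, so bisection is safe.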

# Projection on an epigraph

Here we study the problem of projecting onto the *epigraph* of a convex continuous function. Unlike the computation of the proximal operator of a function or the projection onto its *sublevel sets*, the projection onto epigraphs is more complex, and there exist only a few functions for which semi-explicit formulas are available.
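A hedged sketch of what such a characterisation looks like, written with generic symbols (the precise statement and assumptions are in the post): for a proper closed convex $f$, projecting a point $(z, s)$ onto $\operatorname{epi} f$ reduces to a proximal computation plus a scalar equation.

```latex
\operatorname{proj}_{\operatorname{epi} f}(z,s) =
\begin{cases}
(z,\, s), & \text{if } f(z) \le s,\\[4pt]
\bigl(\operatorname{prox}_{\lambda f}(z),\ s + \lambda\bigr), & \text{otherwise},
\end{cases}
```

where $\lambda > 0$ solves the scalar equation $f(\operatorname{prox}_{\lambda f}(z)) = s + \lambda$; this follows from the KKT conditions of minimising $\tfrac{1}{2}\|x - z\|^2 + \tfrac{1}{2}(t - s)^2$ subject to $f(x) \le t$.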

# Variable substitution in optimisation: a paradox

Given an optimisation problem, can we equivalently solve the problem that results from a substitution of variables? Continue reading →

# Metric subregularity for monotone inclusions

Metric subregularity is a local property of set-valued operators which turns out to be a key enabler for linear convergence in several operator-based algorithms. However, subregularity conditions are often stated for the fixed-point residual operator and are rather difficult to verify in practice. In this post we state sufficient metric subregularity conditions for a monotone inclusion which are easier to verify, and we focus on the preconditioned proximal point method (P3M). Continue reading →
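For completeness, the standard definition, stated with symbols of my choosing: a set-valued operator $F : \mathbb{R}^n \rightrightarrows \mathbb{R}^m$ is metrically subregular at $\bar{x}$ for $\bar{y} \in F(\bar{x})$ if

```latex
\exists\, \kappa > 0,\ \exists\ \text{a neighbourhood } U \text{ of } \bar{x}:
\qquad
\operatorname{dist}\!\bigl(x,\, F^{-1}(\bar{y})\bigr)
\;\le\;
\kappa \, \operatorname{dist}\!\bigl(\bar{y},\, F(x)\bigr)
\quad \text{for all } x \in U.
```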

# Do no generic termination criteria exist for steepest descent?

Here we wonder why there are no generic termination criteria for the gradient method which guarantee a desired suboptimality when the objective function is strictly (not strongly) convex and has a Lipschitz-continuous gradient. Does it make sense to terminate when the gradient norm is small? And when should we terminate? Continue reading →
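As a small illustration of the obstruction, a toy construction of my own (not the post's argument): within a class of convex functions whose gradients share a common Lipschitz constant, a fixed gradient norm is compatible with arbitrarily large suboptimality.

```python
# The family f_eps(x) = eps * x**2 has 2-Lipschitz gradients for all
# eps <= 1. At x = g / (2 * eps) the gradient norm equals g for every
# member of the family, yet the suboptimality
#     f_eps(x) - min f_eps = g**2 / (4 * eps)
# blows up as eps -> 0: observing a small gradient alone cannot certify
# any bound on f(x) - f* uniformly over the class.
def suboptimality_at_gradient_norm(eps, g):
    x = g / (2 * eps)          # here f_eps'(x) = 2 * eps * x = g
    return eps * x ** 2        # f_eps(x) - f_eps(0)

# Same gradient norm g = 1e-3, shrinking curvature: the gap grows.
gaps = [suboptimality_at_gradient_norm(eps, 1e-3) for eps in (1.0, 1e-2, 1e-4)]
print(gaps)
```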

# Convergence of the iterates of the gradient method with constant stepsize

The gradient method with constant step length is the simplest method for solving unconstrained optimisation problems involving a continuously differentiable function with Lipschitz-continuous gradient. The motivation for this post came after reading this Wikipedia article, where it is stated that under certain assumptions the sequence of iterates converges to a local optimum, but no further discussion is provided. Continue reading →
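A minimal sketch of the setting, on a problem instance of my choosing: the gradient method with constant stepsize $1/L$ on a convex quadratic, whose gradient is Lipschitz with constant $L = \lambda_{\max}(A)$. Here the iterates themselves, not only the function values, converge to the unique minimiser.

```python
import numpy as np

# f(x) = 0.5 * x' A x - b' x, gradient A x - b, Lipschitz constant
# L = lambda_max(A); the unique minimiser is x_star = A^{-1} b.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, -1.0])
L = np.linalg.eigvalsh(A).max()

x = np.zeros(2)
for _ in range(500):
    x = x - (1.0 / L) * (A @ x - b)   # gradient step with constant stepsize 1/L

x_star = np.linalg.solve(A, b)
print(np.allclose(x, x_star, atol=1e-8))
```

On this strongly convex instance the iterates contract linearly, so 500 iterations are far more than enough; the subtle cases discussed in the post are those where strong convexity fails.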