Monthly Archives: November, 2016

Do no generic termination criteria exist for steepest descent?

Here we wonder why there are no generic termination criteria for the gradient method which guarantee a desired sub-optimality when f is strictly (not strongly) convex and has an L-Lipschitz gradient. Does it make sense to terminate when \|\nabla f(x_k)\| is small? And when should we terminate? Continue reading →
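To make the stopping rule in question concrete, here is a minimal sketch (not taken from the post) of the gradient method with constant stepsize 1/L that terminates once \|\nabla f(x_k)\| drops below a tolerance; the name gradient_descent, the tolerance tol and the quadratic test problem are illustrative assumptions, and the sketch says nothing about what a small gradient norm implies for sub-optimality, which is precisely the question raised above.

import numpy as np

def gradient_descent(grad, x0, L, tol=1e-6, max_iter=10_000):
    # Gradient method with constant stepsize 1/L; stops when the
    # gradient norm falls below tol -- the criterion under discussion.
    x = np.asarray(x0, dtype=float)
    for k in range(max_iter):
        g = grad(x)
        if np.linalg.norm(g) <= tol:
            break
        x = x - g / L  # constant stepsize 1/L
    return x, k

# Illustrative example: f(x) = 0.5 * x'Qx, whose gradient is Qx and
# whose gradient Lipschitz constant L is the largest eigenvalue of Q.
Q = np.array([[2.0, 0.0], [0.0, 0.5]])
x_min, iters = gradient_descent(lambda x: Q @ x, x0=[1.0, 1.0], L=2.0)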


Convergence of the iterates of the gradient method with constant stepsize

The gradient method with constant step length is the simplest method for solving unconstrained optimisation problems involving a continuously differentiable function with Lipschitz-continuous gradient. The motivation for this post came after reading this Wikipedia article, where it is stated that under certain assumptions the sequence \{x_k\} converges to a local optimum, but no further discussion is provided. Continue reading →
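For reference, a brief sketch of the setting (standard facts, not quoted from the post): with a constant stepsize \alpha satisfying 0 < \alpha \le 1/L, the iteration is

x_{k+1} = x_k - \alpha \nabla f(x_k),

and the descent lemma guarantees the per-iteration decrease

f(x_{k+1}) \le f(x_k) - \alpha\left(1 - \tfrac{\alpha L}{2}\right)\|\nabla f(x_k)\|^2;

what this implies for the convergence of the iterates \{x_k\} themselves, rather than of the function values, is exactly what the post examines.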

Third and higher order Taylor expansions in several variables

In this post we show that it is possible to derive third and higher-order Taylor expansions for functions of several variables. Given that the gradient of a function f:\mathbb{R}^n \to\mathbb{R} is vector-valued and its Hessian is matrix-valued, it is natural to guess that its third-order derivative will be tensor-valued. However, not only is the use of tensors not very convenient, but in this context it is also unnecessary. Continue reading →
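One standard way to sidestep tensors (shown here as a sketch of a general fact; the post may proceed differently) is to expand the scalar function \phi(t) = f(x + th) and evaluate at t = 1. Assuming f is three times continuously differentiable,

f(x + h) = f(x) + \nabla f(x)^\top h + \tfrac{1}{2}\, h^\top \nabla^2 f(x)\, h + \tfrac{1}{6}\, \phi'''(0) + o(\|h\|^3),

where \phi'''(0) is the third directional derivative of f at x along h, so the third-order term can be handled without ever forming a third-order tensor explicitly.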

Blogroll

mathbabe – Exploring and venting about quantitative issues
Look at the corners! – The math blog of Dmitry Ostrovsky
The Unapologetic Mathematician – Mathematics for the interested outsider
Almost Sure – A random mathematical blog
Mathematix – Mathematix is the mathematician of the village of Asterix