
Can the nonlinear conjugate gradient optimization method with the Polak-Ribière update be considered a quasi-Newton optimization technique?

If not, why?

2 Answers


Nonlinear conjugate gradient methods are equivalent to memoryless BFGS quasi-Newton methods under the assumption that you perform an exact line search. This was stated in a 1978 paper by Shanno (he's the S in BFGS). Here's the relevant paragraph:

[Image: the relevant paragraph from Shanno's 1978 paper, not reproduced here.]

See http://dx.doi.org/10.1287/moor.3.3.244 for the whole paper.
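
For concreteness, here is a minimal Python sketch of the nonlinear CG iteration with the Polak-Ribière (clipped, "PR+") coefficient. The function names, the backtracking line search, and the Rosenbrock example are illustrative assumptions, not from Shanno's paper; it is the replacement of the backtracking loop by an *exact* line search that yields the equivalence with memoryless BFGS quoted above.

```python
import numpy as np

def polak_ribiere_cg(f, grad, x0, tol=1e-8, max_iter=200):
    """Nonlinear conjugate gradient with the Polak-Ribiere (PR+) coefficient (sketch)."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                  # first direction: steepest descent
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        # Backtracking (Armijo) line search; an *exact* line search here is the
        # assumption under which the iterates match memoryless BFGS.
        t = 1.0
        for _ in range(50):                 # cap the number of halvings
            if f(x + t * d) <= f(x) + 1e-4 * t * g.dot(d):
                break
            t *= 0.5
        x_new = x + t * d
        g_new = grad(x_new)
        beta = max(0.0, g_new.dot(g_new - g) / g.dot(g))   # Polak-Ribiere, clipped at 0
        d = -g_new + beta * d               # next conjugate direction
        if g_new.dot(d) >= 0:               # safeguard: restart if not a descent direction
            d = -g_new
        x, g = x_new, g_new
    return x

# Illustrative use on the Rosenbrock function
f = lambda z: (1 - z[0])**2 + 100 * (z[1] - z[0]**2)**2
grad = lambda z: np.array([-2 * (1 - z[0]) - 400 * z[0] * (z[1] - z[0]**2),
                           200 * (z[1] - z[0]**2)])
print(polak_ribiere_cg(f, grad, np.array([-1.2, 1.0])))
```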

  • You have an approximate Hessian at every step. The quoted passage doesn't tell the whole story. The paper establishes that memoryless BFGS = nonlinear conjugate gradient. (2014-05-07)

A Newton-based optimization method finds a root of the gradient by Newton's method, which requires evaluating the Hessian and solving linear systems with it. A quasi-Newton optimization technique instead uses some approximation of the Hessian, or at least can be interpreted as implicitly using such an approximation.

The non-linear conjugate gradient optimization method is not a quasi-Newton technique, because there is no Hessian involved, not even implicitly.
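
To make the contrast concrete, here is a small sketch (the function names are hypothetical, chosen for illustration): the Newton step forms and solves a linear system with the true Hessian, while the BFGS quasi-Newton update builds an approximate Hessian purely from steps and gradient differences.

```python
import numpy as np

def newton_step(grad, hess, x):
    """Newton: solve the linear system H(x) d = -grad(x) with the true Hessian."""
    return np.linalg.solve(hess(x), -grad(x))

def bfgs_update(B, s, y):
    """Quasi-Newton (BFGS): update the approximate Hessian B using only the step
    s = x_new - x and the gradient change y = grad(x_new) - grad(x);
    the true Hessian is never evaluated."""
    Bs = B @ s
    return B - np.outer(Bs, Bs) / s.dot(Bs) + np.outer(y, y) / y.dot(s)
```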

  • @littleO What I mean is that when you apply Newton's method for nonconvex optimization (and in fact also for convex optimization), you also care about decreasing the objective. So you supplement Newton's method with a line search that computes a step size $t$ such that $f(x_k + t d_k)$ is sufficiently smaller than $f(x_k)$. (2017-04-14)
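
For illustration, a minimal sketch of such a backtracking (Armijo) line search, assuming NumPy-style vectors and a descent direction $d_k$; the names and constants are placeholders.

```python
def armijo_step(f, xk, dk, grad_k, c=1e-4, shrink=0.5, t=1.0):
    """Backtracking line search: shrink t until the sufficient-decrease (Armijo)
    condition f(xk + t*dk) <= f(xk) + c*t*grad_k.dot(dk) holds."""
    fk = f(xk)
    slope = grad_k.dot(dk)          # negative when dk is a descent direction
    while f(xk + t * dk) > fk + c * t * slope:
        t *= shrink
    return t

# e.g. a damped (quasi-)Newton iteration: x_next = xk + armijo_step(f, xk, dk, grad_k) * dk
```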