Can the nonlinear conjugate gradient optimization method with the Polak-Ribière update be classified as a quasi-Newton optimization technique?
If not, why not?
Nonlinear conjugate gradient methods are equivalent to memoryless BFGS quasi-Newton methods under the assumption that you perform an exact line search. This was shown in a 1978 paper by Shanno (he's the S in BFGS).
See http://dx.doi.org/10.1287/moor.3.3.244 for the whole paper.
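In outline (this is a reconstruction of the standard argument, not a quote from the paper), memoryless BFGS restarts the inverse-Hessian approximation from the identity at every step. With $s_k = x_{k+1} - x_k$ and $y_k = g_{k+1} - g_k$, the new direction is

$$d_{k+1} = -H_{k+1} g_{k+1}, \qquad H_{k+1} = \left(I - \frac{s_k y_k^T}{y_k^T s_k}\right)\left(I - \frac{y_k s_k^T}{y_k^T s_k}\right) + \frac{s_k s_k^T}{y_k^T s_k}.$$

An exact line search gives $g_{k+1}^T s_k = 0$, so every term containing $s_k^T g_{k+1}$ drops out and

$$d_{k+1} = -g_{k+1} + \frac{y_k^T g_{k+1}}{y_k^T s_k}\, s_k = -g_{k+1} + \beta_k^{HS} d_k,$$

which is the Hestenes-Stiefel conjugate gradient direction. Exact line searches also give $d_k^T g_{k+1} = 0$ and $d_k^T g_k = -\|g_k\|^2$, hence $y_k^T d_k = \|g_k\|^2$, so $\beta_k^{HS}$ collapses to the Polak-Ribière formula $\beta_k^{PR} = \frac{g_{k+1}^T (g_{k+1} - g_k)}{\|g_k\|^2}$.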
A Newton-based optimization method tries to find a root of the gradient using Newton's method. This requires evaluating the Hessian and solving linear systems with it. A quasi-Newton optimization technique instead uses some approximation of the Hessian, or at least can be interpreted as implicitly maintaining such an approximation.
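To make the distinction concrete, here is a minimal NumPy sketch of both ideas (the Rosenbrock test function, step sizes, and tolerances are my own illustrative choices, not something from the answer): the Newton iteration needs the true Hessian and a linear solve at every step, whereas BFGS only ever touches gradients.

```python
import numpy as np

def f(x):      # Rosenbrock test function (illustrative choice)
    return 100.0 * (x[1] - x[0]**2)**2 + (1.0 - x[0])**2

def grad(x):
    return np.array([-400.0 * x[0] * (x[1] - x[0]**2) - 2.0 * (1.0 - x[0]),
                     200.0 * (x[1] - x[0]**2)])

def hess(x):
    return np.array([[1200.0 * x[0]**2 - 400.0 * x[1] + 2.0, -400.0 * x[0]],
                     [-400.0 * x[0], 200.0]])

def newton(x, iters=100, tol=1e-10):
    """Newton's method on grad(f) = 0: needs the true Hessian at every step."""
    for _ in range(iters):
        g = grad(x)
        if np.linalg.norm(g) < tol:
            break
        x = x - np.linalg.solve(hess(x), g)   # solve H p = -g, then step to x + p
    return x

def bfgs(x, iters=500, tol=1e-8):
    """Quasi-Newton (BFGS): maintains an inverse-Hessian approximation Hinv
    built only from gradient differences; the true Hessian never appears."""
    I = np.eye(x.size)
    Hinv = I.copy()
    g = grad(x)
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        p = -Hinv @ g
        alpha = 1.0                           # crude Armijo backtracking line search
        while f(x + alpha * p) > f(x) + 1e-4 * alpha * (g @ p):
            alpha *= 0.5
        s = alpha * p
        x = x + s
        g_new = grad(x)
        y = g_new - g
        if y @ s > 1e-12:                     # skip update if curvature condition fails
            rho = 1.0 / (y @ s)
            Hinv = (I - rho * np.outer(s, y)) @ Hinv @ (I - rho * np.outer(y, s)) \
                   + rho * np.outer(s, s)
        g = g_new
    return x

x0 = np.array([-1.2, 1.0])
print(newton(x0.copy()), bfgs(x0.copy()))     # both approach [1, 1]
```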
The nonlinear conjugate gradient optimization method is not a quasi-Newton technique, because no Hessian is involved, not even implicitly: the search direction is built only from gradients and the previous search direction.
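For contrast, here is the same kind of sketch for nonlinear CG with the Polak-Ribière beta (again, the test problem and line-search parameters are my own illustrative choices): the update uses only f, the current and previous gradients, and the previous direction, and no matrix is ever formed or stored. Note that Shanno's equivalence in the other answer requires an exact line search, which this crude backtracking does not provide.

```python
import numpy as np

def f(x):      # same Rosenbrock test problem (illustrative choice)
    return 100.0 * (x[1] - x[0]**2)**2 + (1.0 - x[0])**2

def grad(x):
    return np.array([-400.0 * x[0] * (x[1] - x[0]**2) - 2.0 * (1.0 - x[0]),
                     200.0 * (x[1] - x[0]**2)])

def cg_pr(x, iters=5000, tol=1e-8):
    """Nonlinear CG with the Polak-Ribiere (PR+) beta: gradients and the
    previous direction only; no Hessian or Hessian approximation anywhere."""
    g = grad(x)
    d = -g
    for _ in range(iters):
        if np.linalg.norm(g) < tol:
            break
        if g @ d >= 0:                        # restart if d is not a descent direction
            d = -g
        alpha = 1.0                           # crude Armijo backtracking line search
        while f(x + alpha * d) > f(x) + 1e-4 * alpha * (g @ d) and alpha > 1e-12:
            alpha *= 0.5
        x = x + alpha * d
        g_new = grad(x)
        beta = max(0.0, g_new @ (g_new - g) / (g @ g))   # PR+, clipped at zero
        d = -g_new + beta * d
        g = g_new
    return x

print(cg_pr(np.array([-1.2, 1.0])))           # approaches [1, 1]
```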