
In most mathematical optimization books and research papers, the authors explicitly state that the studied algorithm works in a Hilbert space.

Just a simple question then: are there cases where we have to do optimization in non-Hilbert spaces, or do the authors state it just to be mathematically rigorous, so we know that we have all the tools we need at hand?

1 Answer


A close analogue of a Hilbert space (a complete inner product space) is a Banach space (a complete normed vector space). An example of the difference: $L^p(X)$ for $p\in[1,\infty)$ is always a Banach space, with norm $\|f\|_{L^p(X)} = \left(\int_X |f(x)|^p\,dx\right)^{1/p}$, but when $p = 2$ it is additionally the Hilbert space $L^2(X)$, whose norm is induced by the inner product $\langle f,g\rangle_{L^2} = \int_X f(x)\overline{g(x)}\,dx$. Hopefully this shows the difference between the two.
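One concrete way to tell the two apart is the parallelogram law, $\|x+y\|^2 + \|x-y\|^2 = 2\|x\|^2 + 2\|y\|^2$, which holds in a normed space exactly when the norm comes from an inner product. A small numerical sketch (using the finite-dimensional analogues $\ell^2$ and $\ell^1$; the function names are just for illustration):

```python
# Parallelogram law: ||x+y||^2 + ||x-y||^2 == 2||x||^2 + 2||y||^2.
# It holds iff the norm is induced by an inner product, so it holds
# for the l^2 norm but generally fails for the l^1 norm.

def norm(v, p):
    """p-norm of a finite vector, the discrete analogue of the L^p norm."""
    return sum(abs(t) ** p for t in v) ** (1.0 / p)

def parallelogram_gap(x, y, p):
    """LHS minus RHS of the parallelogram identity; zero iff the law holds."""
    add = [a + b for a, b in zip(x, y)]
    sub = [a - b for a, b in zip(x, y)]
    lhs = norm(add, p) ** 2 + norm(sub, p) ** 2
    rhs = 2 * norm(x, p) ** 2 + 2 * norm(y, p) ** 2
    return lhs - rhs

x, y = [1.0, 0.0], [0.0, 1.0]
print(parallelogram_gap(x, y, 2))  # ~0.0: the l^2 norm is a Hilbert norm
print(parallelogram_gap(x, y, 1))  # 4.0: the l^1 norm is not
```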

In Hilbert spaces, there's a result called the Minimum Principle:

A non-empty closed convex set in a Hilbert space has a unique element of least norm.

This fails in Banach spaces in easy-to-construct ways.
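To sketch one such failure (a standard textbook counterexample, stated here under the sup norm on $C[0,1]$), consider

```latex
% A closed convex subset of the Banach space C[0,1] (sup norm) whose
% norm infimum is not attained:
\[
  K \;=\; \Bigl\{\, f \in C[0,1] \;:\;
     \int_0^{1/2} f(t)\,dt \;-\; \int_{1/2}^1 f(t)\,dt \;=\; 1 \,\Bigr\}.
\]
% K is convex and closed (it is the preimage of 1 under a bounded linear
% functional). For every f in K we have
%   1 \le \int_0^1 |f(t)|\,dt \le \|f\|_\infty,
% and continuous piecewise-linear approximations to the step function
% (+1 on [0,1/2), -1 on (1/2,1]) bring the norm arbitrarily close to 1,
% so the infimum of the norm over K equals 1. But \|f\|_\infty = 1 would
% force f = +1 on [0,1/2) and f = -1 on (1/2,1], which no continuous
% function achieves, so no element of K has least norm.
```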

So, essentially, in Hilbert spaces we can talk about "smallest elements" in a well-defined way, but in general Banach spaces we cannot.