10

We learned that if $V$ is a finite-dimensional inner product space, then for every linear transformation $T:V\to V$ there exists a unique linear transformation $T^*:V\to V$ such that $\forall u, v \in V: (Tv, u)=(v, T^*u)$.

The construction of $T^*$ used the fact that $V$ is finite-dimensional and therefore has a (finite) orthonormal basis; that argument breaks down when $V$ is infinite-dimensional.
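For reference, in the finite-dimensional real case the construction can be written explicitly: if $e_1, \ldots, e_n$ is an orthonormal basis of $V$, one can define

$$T^* u = \sum_{i=1}^{n} (T e_i, u)\, e_i,$$

so that $(e_j, T^* u) = (T e_j, u)$ for each $j$, and $(Tv, u) = (v, T^* u)$ follows by linearity in $v$.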

Are there infinite-dimensional inner product spaces in which not every linear transformation has an adjoint? Or is it somehow possible to extend this definition to infinite-dimensional spaces as well?

3 Answers

8

This is true for (edit: bounded operators on) Hilbert spaces thanks to the Riesz representation theorem. It is false in general: let $e_1, e_2, ...$ be a sequence of orthogonal unit vectors in some infinite-dimensional Hilbert space and let $V$ be their span. Then the linear transformation

$$T(e_i) = e_1 + e_2 + ... + e_i$$

does not have an adjoint, since $\langle T(e_i), e_j \rangle = 1$ whenever $j \le i$, but $\langle e_i, T^{\ast}(e_j) \rangle$ must be $0$ for all sufficiently large $i$ (with $j$ fixed) for any linear operator $T^{\ast}: V \to V$, because $T^{\ast}(e_j)$ lies in $V$ and is therefore a finite linear combination of the $e_i$. This example can be modified so that $T$ is bounded.
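A quick numerical sketch of this obstruction (a finite NumPy truncation; the basis size `dim` and the helper names are my own choices, not from the answer):

```python
import numpy as np

def e(i, dim):
    """Coefficient vector of the basis vector e_i (1-indexed) in R^dim."""
    v = np.zeros(dim)
    v[i - 1] = 1.0
    return v

def T_of_e(i, dim):
    """Coefficient vector of T(e_i) = e_1 + e_2 + ... + e_i."""
    v = np.zeros(dim)
    v[:i] = 1.0
    return v

dim, j = 50, 3
# <T(e_i), e_j> stays equal to 1 for every i >= j, so a candidate T*(e_j),
# being a *finite* linear combination of the e_i, cannot reproduce it.
vals = {float(T_of_e(i, dim) @ e(j, dim)) for i in range(j, dim + 1)}
print(vals)  # {1.0}
```

However far out the truncation is taken, the inner product never decays to $0$, which is exactly what a finitely supported $T^{\ast}(e_j)$ would require.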

If you are not familiar with Hilbert space theory, beware that the definition of "orthonormal basis" is different: it does not refer to a Hamel basis (which is what the word "basis" ordinarily means) but to a collection of orthogonal unit vectors such that only the zero vector is orthogonal to all of them. Equivalently, it is a collection of orthogonal unit vectors whose span is dense (not necessarily the whole space).

  • 0
    How do you mean "such that every vector is orthogonal to all of them"? Such that every vector in the orthonormal basis is orthogonal to all other vectors in the basis? Or did you mean "such that only the zero vector is orthogonal to all of them"? (2011-03-26)
  • 0
    Also, how do you mean "this example can be modified so that $T$ is bounded"? As you write further up, bounded operators have adjoints, so you can't mean that the example can be modified so that $T$ is bounded but still doesn't have an adjoint? (2011-03-26)
  • 0
    Thanks. I'm indeed unfamiliar with Hilbert spaces. I was trying to think of $\mathbb{R}[x]$ with the inner product $\int_{a}^{b} f(x)g(x)\,dx$. (2011-03-26)
  • 0
    @daniel.jackson: That's a Hilbert space, with orthogonal polynomials as an orthonormal basis; see http://en.wikipedia.org/wiki/Orthogonal_polynomials. In fact, in this case the orthogonal polynomials form a (Hamel) basis in the usual sense, since all polynomials are finite linear combinations of them. But you could look at a suitable space of functions (perhaps all uniformly continuous functions) on that interval and then the orthogonal polynomials would still form an orthonormal basis (though not a Hamel basis) since they can arbitrarily closely approximate all the functions. (2011-03-26)
  • 0
    @joriki: no, it's not. It's not complete. As for your first question, $V$ is also not complete. You can modify $T$ so that the coefficients decrease quickly enough for it to be bounded. (2011-03-26)
  • 0
    @daniel.jackson: @Qiaochu is right; the space of just the polynomials is not a Hilbert space because it's not complete, but a suitable function space, for instance the space of all square-integrable functions on $[a,b]$, is a Hilbert space and orthogonal polynomials form a (Hilbert, not Hamel) basis for it. (2011-03-26)
4

Let's look at an example. Take the vector space $\mathbb{R}[x]$ of all real polynomials in one variable and define an inner product as

$$\left( \sum_{j=0}^n a_jx^j, \sum_{k=0}^m b_kx^k \right)=\sum_{h=0}^{\min(n,m)}a_hb_h.$$
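In code, with polynomials represented as coefficient lists (a hypothetical helper, not from the answer), this inner product is just a truncated dot product:

```python
def inner(p, q):
    """Inner product on R[x] for coefficient lists p = [a_0, ..., a_n],
    q = [b_0, ..., b_m]: the sum of a_h * b_h for h up to min(n, m).
    zip stops at the shorter list, which implements the min."""
    return sum(a * b for a, b in zip(p, q))

# (1 + 2x, 3 + x + 5x^2) = 1*3 + 2*1 = 5
print(inner([1, 2], [3, 1, 5]))  # 5
```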

Now let $T$ be the linear operator such that

$$T\sum_{j=0}^n a_jx^j=\left(\sum_{j=0}^n a_j\right)+\left(\sum_{j=1}^na_j\right)x+\ldots + (a_{n-1}+a_n)x^{n-1}+a_nx^n.$$

Think of $T$ as the operator represented by the infinite matrix below:

$$\begin{bmatrix} 1 & 1 & 1 & \ldots \\ 0 & 1 & 1 & \ldots \\ 0 & 0 & 1 & \ldots \\ \vdots & \vdots & \vdots & \ddots \\ \end{bmatrix}$$

If $T$ had an adjoint with respect to the inner product $(\cdot,\cdot)$, it would have to be associated with this infinite matrix (the transpose of the one above):

$$\begin{bmatrix} 1 & 0 & 0 & \ldots \\ 1 & 1 & 0 & \ldots \\ 1 & 1 & 1 & \ldots \\ \vdots & \vdots & \vdots & \ddots \\ \end{bmatrix}$$

but this makes no sense in $\mathbb{R}[x]$. Formally, suppose such an adjoint operator $T^\star$ exists, and fix $k \in \mathbb{N}$: what is $T^\star x^k$? For all $n=0, 1,\ldots$, we should have

$$(x^n, T^\star x^k)=(T x^n, x^k)=(1+x+\ldots+ x^n, x^k)=\begin{cases}1 & n \ge k, \\ 0 & n < k;\end{cases}$$

this means that $T^\star x^k$ would have to be a polynomial of degree at least $n$ for every $n\in \mathbb{N}$, which is impossible. So $T$ has no adjoint.
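The computation is easy to check numerically. A sketch with coefficient lists (the helper names `T`, `inner`, `x_pow` are mine): `T` replaces each coefficient by the suffix sum $a_i + a_{i+1} + \cdots + a_n$, matching the upper-triangular matrix of ones.

```python
def inner(p, q):
    # the truncated-dot-product inner product on R[x]
    return sum(a * b for a, b in zip(p, q))

def T(p):
    # c_i = a_i + a_{i+1} + ... + a_n  (the infinite matrix of ones)
    return [sum(p[i:]) for i in range(len(p))]

def x_pow(n):
    # coefficient list of the monomial x^n
    return [0] * n + [1]

# (T x^n, x^k) = 1 exactly when n >= k, so T* x^k would need a nonzero
# coefficient in every degree >= k -- which no polynomial has.
for k in range(4):
    print(k, [inner(T(x_pow(n)), x_pow(k)) for n in range(6)])
```

Each row $k$ of the output is $0$ until $n = k$ and $1$ from then on, reproducing the cases formula above.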


The reason for this is that the mapping

$$P \in \mathbb{R}[x] \mapsto (P, \cdot) \in \mathbb{R}[x]^\star$$

(here $\mathbb{R}[x]^\star$ is the algebraic dual space of $\mathbb{R}[x]$) is not an isomorphism, because it is not surjective. In fact it can be shown that $\mathbb{R}[x]^\star$ can be represented as $\mathbb{R}[[x]]$, the space of formal power series with real coefficients.
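To see the identification (a sketch): every formal power series $c=\sum_{k\ge 0}c_k x^k \in \mathbb{R}[[x]]$ gives a linear functional on $\mathbb{R}[x]$ via

$$\varphi_c\left(\sum_{j=0}^n a_j x^j\right)=\sum_{j=0}^n a_j c_j,$$

the sum being finite because a polynomial has only finitely many nonzero coefficients; conversely, every $\varphi \in \mathbb{R}[x]^\star$ arises this way with $c_k=\varphi(x^k)$. Only those $c$ with finitely many nonzero coefficients come from the pairing $P \mapsto (P, \cdot)$, which is why the map is not surjective.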

  • 0
    Note: Qiaochu's operator $T$ follows more or less the same idea as mine. The difference is that I didn't want to speak about topologies. Instead I tried to keep things on purely algebraic grounds. (2011-03-26)
  • 0
    There is no topology in my example; it's the same as yours. (2011-03-26)
  • 1
    @Qiaochu, @dissonance: I think this may be the same misunderstanding that prompted my question about modifying $T$ to be bounded: Because you start off talking about Hilbert spaces and choose your vectors from a Hilbert space, it seems as if your example refers to Hilbert spaces (and hence topology), whereas in fact you never use the fact that these vectors originally came from a Hilbert space, and you only consider the (incomplete) space they span, so you could have just taken them from any inner product space. (2011-03-26)
  • 0
    @joriki, @Qiaochu: Exactly, joriki! I mean, my example is nothing more than a reformulation of Qiaochu's. Then why did I publish it? Because it doesn't make use of Hilbert space language but only of familiar objects like polynomials and matrices. So I thought it might add something to the discussion, especially because the OP says he has little familiarity with Hilbert spaces. Sorry for the misunderstanding, Qiaochu. (2011-03-26)
3

It is possible to define adjoints on infinite-dimensional inner product spaces, but things get more complicated. Bounded linear operators on Hilbert spaces always have well-behaved adjoints, but for unbounded operators the domain of the adjoint may differ from the domain of the operator, and may in fact be just the zero subspace. For the nice case, see this. For the uglier cases, see this, especially the examples beginning on p. 15.
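For a densely defined (possibly unbounded) operator $T$ on a Hilbert space $H$ with domain $D(T)$, the standard definition takes

$$D(T^*)=\{\,u\in H : v\mapsto (Tv,u)\ \text{extends to a bounded linear functional on}\ H\,\},$$

and for $u \in D(T^*)$, $T^*u$ is the unique vector (supplied by the Riesz representation theorem) with $(Tv,u)=(v,T^*u)$ for all $v\in D(T)$; density of $D(T)$ guarantees uniqueness. Nothing forces $D(T^*)$ to be large, which is the source of the pathologies mentioned above.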

  • 0
    I tried understanding the links you provided but I'm still in doubt: is it possible to find an infinite-dimensional inner product space and a linear transformation $T$ such that no linear transformation $S$ exists with $(Tv, u)=(v, Su)$? (2011-03-26)
  • 0
    @daniel.jackson: Yes, it's possible; both Qiaochu and dissonance construct such a $T$ in their answers. (2011-03-26)