
Let $E$ be a finite-dimensional real vector space with a norm $\|\cdot\|$. Define the integral of measurable functions with values in $E$ by choosing a basis and integrating componentwise. How do we prove the triangle inequality $$ \left\| \int_X f \right\| \leq \int_X \| f \|, $$ for $f : X \rightarrow E$?

I know a proof when $E=\mathbb{C}$ or $\mathbb{H}$, i.e. when $\|\cdot\|=\|\cdot\|_2$ in dimension $2$ or $4$ (the key is to use the multiplicativity of the norm). I think I also came up with a proof of this inequality using approximation of $f$ by simple functions, but my proof is very messy...

  • How did you define the integral? (2011-07-20)
  • Hint: you can express the norm of a vector by the values linear functionals give when they act on it, and your integral has the property that it commutes with (bounded) linear transformations. (2011-07-20)
  • @Mariano: choose a basis $(e_i)$, write $f=\sum f_i e_i$ and define $\int f := \sum (\int f_i) e_i$. It is easy to prove that this is independent of the basis. (2011-07-20)
  • @Mark: Oh yes! That's very nice. (2011-07-20)
  • See also the [Bochner integral](http://en.wikipedia.org/wiki/Bochner_integral). (2011-07-20)
  • @Theo: the OP asks a question about *finite dimensional* real vector spaces, and you point him to the Bochner integral?! I'm usually in favour of killing mosquitoes with bazookas, but dropping the Tsar Bomba on it is a bit excessive. (2011-07-20)
  • Retagging as real-analysis, since the tag functional-analysis may have been what prompted Theo's comment. (Functional analysis usually deals with infinite-dimensional spaces.) (2011-07-20)
  • @Willie: I can't follow, and I don't see a bazooka or a Tsar Bomba here. I was pointing to something the OP should be interested in as a straightforward extension of these ideas. Also, proving the inequality for the Bochner integral has the same level of difficulty and gives a basis-independent description from the start. (2011-07-20)
  • @Theo: I dunno, I'd think that the more straightforward generalisation of these ideas would be the Pettis integral (see also Mark Schwarzmann's comment), for which you can also prove Jensen's inequality rather straightforwardly in one fell swoop. I think the definition of the Bochner integral carries too much unnecessary baggage. (2011-07-20)
  • @Willie: I really don't understand your complaint about Bochner (there is only one thing you need to define: a simple function; the rest is taken care of by completeness), but let's just agree to disagree. (2011-07-20)
  • I don't really understand it either, but it sure is funny. (2011-07-20)
  • The Pettis integral has the advantage that one doesn't really need to develop a new integration theory, as everything is reduced to the scalar case. The definition of the Bochner integral of a general function is not technically complicated, but it sometimes baffles people who are used to the approach where integrals of non-negative functions are defined first and then generalized to general functions. (2011-07-20)
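The dual-norm approach suggested in the comments can be sanity-checked numerically. This is only a sketch, assuming the Euclidean norm on $\mathbb{R}^3$ and a finite measure space (so that the integral is a weighted sum); the optimal functional $k$ with $\|k\|_* \le 1$ attaining $\|v\| = k(v)$ is $k = v/\|v\|$ for the 2-norm:

```python
import numpy as np

rng = np.random.default_rng(0)

# A "simple function" f: X -> R^3 on a finite measure space X = {x_1, ..., x_5},
# represented as one vector value per point, with positive weights mu.
values = rng.normal(size=(5, 3))    # f(x_i) in R^3
mu = rng.uniform(0.1, 1.0, size=5)  # measure of each point

# Componentwise integral: sum_i f(x_i) * mu(x_i)
integral_f = values.T @ mu

# Triangle inequality: || int f || <= int || f ||
lhs = np.linalg.norm(integral_f)
rhs = np.dot(np.linalg.norm(values, axis=1), mu)
assert lhs <= rhs + 1e-12

# Duality: k = v / ||v|| attains ||v|| = k(v) for the 2-norm ...
k = integral_f / np.linalg.norm(integral_f)
assert np.isclose(k @ integral_f, lhs)
# ... and k(int f) = int k(f), since k commutes with the (finite) integral
assert np.isclose(k @ integral_f, (values @ k) @ mu)
```

Since $k(f(x)) \le \|f(x)\|$ pointwise (Cauchy-Schwarz), integrating gives $\|\int f\| = k(\int f) = \int k(f) \le \int \|f\|$, which is exactly Mark's hint.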

1 Answer


Since Mark already gave, in the comments, a proof using duals, let me sketch here a proof using convexity.

We make the following simplifying assumptions:

  1. You are integrating over a space $X$ with finite total volume. (If not, approximate $f$ by cut-offs of $f$ on subsets of finite volume; the integrability of $f$ guarantees that you can do so, by Chebyshev's inequality.)
  2. $X$ has total volume $1$. You can arrange this by rescaling the measure, since both sides of the inequality scale linearly.

Observe that the norm is a convex function. We shall prove Jensen's inequality for a probability space, which then implies the desired triangle inequality by taking $\Psi = \|\cdot\|$.
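On a probability space with finitely many atoms, taking $\Psi = \|\cdot\|$ in Jensen's inequality is just the finite convexity statement $\|\sum_i p_i v_i\| \le \sum_i p_i \|v_i\|$. A minimal numerical sketch, assuming hypothetical weights and vectors and the sup-norm on $\mathbb{R}^2$:

```python
import numpy as np

rng = np.random.default_rng(1)

# Probability weights p_i summing to 1, and vectors v_i = f(x_i) in R^2.
p = rng.uniform(size=6)
p /= p.sum()
v = rng.normal(size=(6, 2))

# Discrete Jensen for the (convex) sup-norm:
# || sum_i p_i v_i ||_inf  <=  sum_i p_i || v_i ||_inf
lhs = np.linalg.norm(p @ v, ord=np.inf)
rhs = p @ np.linalg.norm(v, ord=np.inf, axis=1)
assert lhs <= rhs + 1e-12
```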

Theorem (Jensen's inequality). Let $(X,\Sigma,\mu)$ be a probability space (that is, a measure space with total volume $1$). Let $f:X\to V$ be an integrable function taking values in some (real) topological vector space $V$, and let $\Psi:V\to\mathbb{R}$ be a convex function. Then $$ \Psi\left(\int f \, d\mu\right) \leq \int \Psi(f) \, d\mu. $$

Sketch of Proof:

Let $g = \int f \, d\mu \in V$. By convexity, $\Psi$ has a subgradient at $g$: there exists a linear functional $k\in V^*$ such that $\Psi(g) + k(h-g) \leq \Psi(h)$ for all $h\in V$. (This is a generalisation of the supporting hyperplane theorem; in the finite-dimensional case you can just use the supporting hyperplane theorem.) Integrating this inequality over $X$, we get

$$ \int \Psi(g) d\mu + \int k(f-g) d\mu \leq \int \Psi(f) d\mu $$

Since the space has total mass $1$ and $g$ does not depend on the position $x\in X$, the first integral on the LHS is just $\Psi(g) = \Psi(\int f \, d\mu)$. Now $k$ is a linear functional, so it commutes with integration, and

$$ \int (f-g)d\mu = \int f d\mu - \int f d\mu = 0 $$

so the second term on the LHS is 0. Q.E.D.
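The subgradient in the proof can be made concrete: assuming $\Psi=\|\cdot\|_2$ on $\mathbb{R}^n$ and $g\ne 0$, the functional $k(h)=\langle g/\|g\|,\, h\rangle$ works, since then $\Psi(g) + k(h-g) = \langle g/\|g\|,\, h\rangle \le \|h\|$ by Cauchy-Schwarz. A quick numerical check of this supporting-hyperplane inequality:

```python
import numpy as np

rng = np.random.default_rng(2)

g = rng.normal(size=4)
k = g / np.linalg.norm(g)  # subgradient of the 2-norm at g (g != 0)

# Supporting hyperplane: Psi(g) + k(h - g) <= Psi(h) for arbitrary h
for _ in range(100):
    h = rng.normal(size=4)
    assert np.linalg.norm(g) + k @ (h - g) <= np.linalg.norm(h) + 1e-12
```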