15

From Wikipedia:

The vector space of (equivalence classes of) measurable functions on $(S, \Sigma, \mu)$ is denoted $L^0(S, \Sigma, \mu)$.

This doesn't seem connected to the definition of $L^p(S, \Sigma, \mu)$ for $p \in (0, \infty)$ as the set of measurable functions $f$ such that $\int_S |f|^p \, d\mu < \infty$. So I wonder whether I am missing some connection, and why the notation $L^0$ is used if there is none?

Thanks and regards!

  • 1
    The notation $L^0$ suggests that you are using the "$L^0$ norm."2012-12-28
  • 0
    @Qiaochu: Thanks! How is the $L^0$ norm defined? I didn't find it in the Wikipedia article.2012-12-28
  • 1
    It's defined in the Wikipedia article (not as a norm, which is why I used quotes). The Wikipedia article defines the topology induced by the "norm," which is convergence in measure.2012-12-28
  • 0
    Still not found.2012-12-28
  • 1
    The name $L^0$-norm is used for at least two different things, neither of which is a norm. One of them is $\int \frac{|f|}{1+|f|}$, the other is $\mu\{x: f(x)\ne 0\}$. Here it's the former.2012-12-28
  • 0
    @PavelM: Thanks! Where did you find the two definitions? By "different", I guess you think they are not equivalent norms?2012-12-28
  • 1
    @Tim: I am not sure how much clearer I can be. It is literally the second sentence in the part of the Wikipedia article you link from: "By definition, it contains all the $L^p$, and is equipped with the topology of convergence in measure."2012-12-28
  • 0
    Well, since they are not norms, they cannot be equivalent norms. However, the first one is always bounded by the second, which I am sure you can prove yourself. In the other direction we have the following example: on $[0,1]$ the functions $f_n(x)=1/n$ have second "norm" (measure of support) equal to $1$ while the first "norm" tends to $0$ as $n\to \infty$. ... "$L^0$ norm" as the size of support is used in [compressed sensing](http://en.wikipedia.org/wiki/Compressed_sensing). The other one is a standard way to metrize convergence in measure.2012-12-28
  • 0
    @QiaochuYuan: I guess I still miss something here. Does convergence in measure define the $L^0$ norm, and how?2012-12-28
  • 1
    @Tim: yes, except that "norm" is in quotes. Convergence in measure is "the topology that would be defined by the $L^0$ norm if there were such a thing," or something like that.2012-12-28
  • 0
    @PavelM: Thanks! (1) What are the names for the two "norms" and/or their induced metrics? Is $\mu(f \neq 0)$ called the size of support of $f$? What is the name for the other one? (2) Are they some generalized types of norms? Are the "metrics" induced from them metrics or some generalized metrics?2012-12-28
  • 1
    (1) I don't think they have established names. I used "size" informally; it's more precise to call $\mu\{f\ne 0\}$ the measure of support of $f$. This is sufficiently descriptive. I don't know of a good name for the other. Note that the choice of integrand $|f|/(1+|f|)$ is pretty arbitrary: one could use $\min(|f|,1)$ or $\tan^{-1}|f|$, or lots of other bounded functions for the same purpose.... They are both [quasinorms](http://en.wikipedia.org/wiki/Quasinorm); you can take $K=2$ in the definition.2012-12-28
  • 1
    (2) They both induce legitimate metrics. The proof of the triangle inequality $d(f,h)\le d(f,g)+d(g,h)$ is easier for the second, because $\{f\ne h\}\subset \{f\ne g\}\cup \{g\ne h\}$. For the first one it's a little longer: one has to show that the function $\psi(t)=t/(1+t)$ is *subadditive*: $\psi(t+s)\le \psi(t)+\psi(s)$ for all $t,s\ge 0$. It then follows that $\psi(|f-h|)\le \psi(|f-g|)+\psi(|g-h|)$, and integration gives the triangle inequality.2012-12-28
  • 0
    Thanks, @PavelM! Do you have some references on the two "$L^0$ norms", among others? I am curious where people learn these things from.2012-12-28
  • 0
    @Tim One of my comments above has a reference to the Wikipedia article on compressed sensing: this is where the "measure of support" is used all the time. The other one is found in textbooks on real analysis, specifically in a section that discusses [convergence in measure](http://en.wikipedia.org/wiki/Convergence_in_measure). E.g., Folland's *Real Analysis*, or Royden's book by the same title, etc.2012-12-28
  • 0
    @PavelM: Thanks! In an earlier comment, " the first one is always bounded by the second, which I am sure you can prove yourself. In the other direction we have the following example: on [0,1] the functions $f_n(x)=1/n$ have second "norm" (measure of support) equal to 1 while the first "norm" tends to 0 as n→∞". I think the example "in the other direction" is still for the first direction: "the first one is always bounded by the second"?2012-12-29
  • 0
    Let's see: "first norm" of $f_n$ is $\int_0^1 \frac{1/n}{1+1/n} dx = \frac{1}{n+1}$. "Second norm" (measure of support) is exactly $1$. So, this example demonstrates that one **cannot** have a bound of the form "second norm"$\le $constant$*$"first norm". Of course, this example conforms to the statement "first norm"$\le$"second norm", but that does not tell us much: we can't prove a general statement by showing an example for which it is true. (A numerical sketch of this comparison appears after this comment thread.)2012-12-29
  • 0
    To prove that "first norm"$\le$"second norm", one argues as follows: for any value $f(x)$ we have $|f(x)|\le |f(x)|+1$, hence $\frac{|f(x)|}{|f(x)|+1}\le 1$. Integrating this inequality over the set $\{f\ne 0\}$, we obtain that "first norm" $=\int_{\{f\ne 0\}} \frac{|f(x)|}{|f(x)|+1}\,d\mu \le \int_{\{f\ne 0\}} 1\,d\mu =\mu \{f\ne 0\}$, as claimed.2012-12-29
  • 0
    Thanks, @PavelM! "that the choice of integrand $|f|/(1+|f|)$ is pretty arbitrary: one could use $\min(|f|,1)$ or $tan^{-1} |f|$, or lots of other bounded functions for the same purpose.... They are both quasinorms". I was wondering by the choices being arbitrary for the integrand, you are saying they all lead to convergence in measure? Although arbitrary, I guess not any function can be the integrand, isn't it? I searched in Folland's and Royden's books, but didn't see other choices of integrands except $|f|/(1+|f|)$.2012-12-29
  • 0
    Yes, I meant that those functions give other metrics for which the notion of convergence coincides with convergence in measure. What is needed here: the function should vanish at $0$ and be increasing, bounded, and subadditive. Any such function could be used in place of $t/(1+t)$. Since $t/(1+t)$ does the job and is simple enough, it gets used in the books.2012-12-29
  • 0
    @PavelM: Thanks! Sorry for asking, but why "the function should vanish at $0$ and be increasing, bounded, and subadditive"? Is it mentioned in some references?2012-12-29
  • 0
    @Tim No, there is no particular reason to mention things like that in a book. I came up with these properties by thinking of what is needed to show that (a) $\int \psi(|f-g|)$ is a metric; (b) convergence in this metric is equivalent to convergence in measure.2012-12-29
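
To make the comparison in the comments above concrete, here is a small numerical sketch (a Python illustration, not from the thread; Lebesgue measure on $[0,1]$ is approximated by a uniform grid, and the helper names `integral_norm` and `support_norm` are made up). It checks that for $f_n \equiv 1/n$ the integral "norm" is about $1/(n+1)$ while the measure of support stays at $1$, and it spot-checks the subadditivity of $\psi(t)=t/(1+t)$ used in the triangle-inequality argument:

```python
import numpy as np

# Uniform grid approximating Lebesgue measure on [0, 1].
x = np.linspace(0.0, 1.0, 10_001)
dx = x[1] - x[0]

def integral_norm(f):
    """First "L^0 norm": the integral of |f| / (1 + |f|) over [0, 1]."""
    return np.sum(np.abs(f) / (1.0 + np.abs(f))) * dx

def support_norm(f):
    """Second "L^0 norm": the measure of the support {x : f(x) != 0}."""
    return np.sum(f != 0) * dx

for n in (1, 10, 100, 1000):
    f_n = np.full_like(x, 1.0 / n)
    # integral_norm(f_n) is about 1/(n+1), support_norm(f_n) stays about 1,
    # so no bound of the form support_norm <= C * integral_norm can hold.
    print(n, integral_norm(f_n), support_norm(f_n))

# Spot-check subadditivity of psi(t) = t/(1+t) on random nonnegative pairs.
psi = lambda t: t / (1.0 + t)
rng = np.random.default_rng(0)
s, t = rng.exponential(size=(2, 100_000))
assert np.all(psi(s + t) <= psi(s) + psi(t) + 1e-12)
```

The first column of output shrinks toward $0$ while the second stays at $1$, which is exactly PavelM's point that the measure of support cannot be bounded by a constant times the integral "norm".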

4 Answers

12

Note that when we restrict ourselves to probability measures, this terminology makes sense: $L^p$ is the space of those (equivalence classes of) measurable functions $f$ satisfying $$\int |f|^p<\infty.$$ Therefore $L^0$ should be the space of those (equivalence classes of) measurable functions $f$ satisfying $$\int |f|^0=\int 1=1<\infty,$$ that is, the space of all (equivalence classes of) measurable functions $f$. And this is indeed the case.
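
One way to make this heuristic precise on a probability space, under the convention $0^0 = 0$ (other readings of $|f|^0$ are possible; see the comments to the next answer), is: $$\int |f|^0 \, d\mu = \int \mathbf{1}_{\{f \ne 0\}} \, d\mu = \mu\{f \ne 0\} \le 1 < \infty,$$ so the defining condition holds for every measurable $f$, and the resulting space is all of $L^0$.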

  • 0
    Can you also account for the choice of topology using such heuristics? (sorry for this word, I can't think of a better one)2012-12-28
  • 0
    @Martin I can't figure out a totally convincing reason for such a choice of topology, but IMHO it has something to do with the fact that convergence in the $p$-th norm implies convergence in probability. For $p<1$ we don't have a norm, but we still have a metric. I think (but please check it!) that the topology of convergence in probability might be the largest topology smaller than all $L^p$ topologies for $p>0$. Take it with a grain of salt and check if it makes sense; it is just my intuition, and it does sound good :)2012-12-28
  • 0
    Sounds especially good because the $L^p$ spaces are nested if we consider probability measures.2012-12-28
  • 0
    Correction: I wrote "largest topology smaller", but it should be "smallest topology larger"2012-12-28
6

If the measure of $S$ is finite, the $L^p$ spaces are nested: $L^{p}\subset L^q$ whenever $p\ge q$. The smaller the exponent, the larger the space. Since the space of measurable functions contains all of the $L^p$ for $p>0$, one may be tempted to denote it by $L^0$.

This temptation should be resisted and the notation $L^0$ banished from usage. [/rant]
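
For reference, here is the standard derivation of the nesting mentioned above (a sketch via Hölder's inequality, with exponents $p/q$ and $(1-q/p)^{-1}$ for $p>q$): $$\int_S |f|^q \, d\mu = \int_S |f|^q \cdot 1 \, d\mu \le \left(\int_S |f|^p \, d\mu\right)^{q/p} \mu(S)^{1-q/p},$$ that is, $\|f\|_q \le \mu(S)^{1/q-1/p}\|f\|_p$; finiteness of $\mu(S)$ is exactly what makes the constant finite.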

  • 0
    +1 I was about to write the same thing, excluding the rant :-)2012-12-28
  • 0
    Nice +1. Thanks!2012-12-28
  • 0
    Is there a reason behind the rant?2013-01-03
  • 0
    @MichaelGreinecker Even leaving the two-competing-norms-that-are-not-norms aside, the notation suggests further ambiguity: is $L^0=\{f : \int |f|^0<\infty\}$ (taking $0^0=0$) or $L^0=\{f : \lim_{p\to 0} \|f\|_p<\infty\}$? These are different even on spaces of finite measure.2013-01-03
  • 0
    @Pavel Thank you, I see your point.2013-01-03
5

I do think that $L^0$ is nice usage. As is well known, $\lim_{p \to \infty} \|\cdot \|_{L^p} = \|\cdot\|_\infty$ for certain spaces or functions. The case of $L^0$ is not as pretty, but still nice.

Recall the distribution function $\mu$ of $f$ given by, $$\mu(\alpha) := \mu_f(\alpha) := \mu\{|f|>\alpha\}.$$

Fubini gives that, $$\|f\|_{L^p}^p = p \int_0^\infty \mu_f(\alpha) \alpha^p \frac{\mathrm{d}\alpha}{\alpha}.$$
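
A sketch of the computation behind this identity: write $|f(x)|^p = \int_0^{|f(x)|} p\,\alpha^{p-1}\,\mathrm{d}\alpha$ and swap the order of integration by Tonelli (the integrand is nonnegative): $$\|f\|_{L^p}^p = \int_S \int_0^{|f(x)|} p\,\alpha^{p-1}\,\mathrm{d}\alpha\,\mathrm{d}\mu(x) = p\int_0^\infty \mu\{|f|>\alpha\}\,\alpha^{p-1}\,\mathrm{d}\alpha,$$ which agrees with the displayed formula since $\alpha^p \frac{\mathrm{d}\alpha}{\alpha} = \alpha^{p-1}\,\mathrm{d}\alpha$.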

We can define the Lorentz spaces in a similar way. And indeed, for a finite measure space, we have if $p < q$ that $$L^q \subseteq L^p.$$ Hence, it is natural to define $L^0$ as, $$L^0 = \bigcup_{p > 0} L^p.$$ We would like to have that $L^0$ is also complete as a metric space, otherwise the notation would be quite deceiving indeed. For this we need a notion of convergence. On $L^p$ for $0 < p < 1$ it is not the norm that induces the metric, but rather $\|\cdot\|_p^p$.

So, for $0 < p < 1$ we have, $$d_p(f, g) = p \int_0^\infty \mu\{|f - g|>\alpha\} \alpha^p \frac{\mathrm{d}\alpha}{\alpha}.$$

$\varepsilon$-neighborhoods $N^p_\varepsilon$ of $f$ in $L^p$ are then given by $$N^p_\varepsilon(f) = \Biggl\{g : p \int_0^\infty \mu\{|f - g|>\alpha\} \alpha^p \frac{\mathrm{d}\alpha}{\alpha} < \varepsilon \Biggr\}.$$
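
That $d_p$ is a genuine metric for $0 < p < 1$ (note that $d_p(f,g) = \|f-g\|_p^p$) comes down to the elementary inequality $(a+b)^p \le a^p + b^p$ for $a,b \ge 0$; a sketch: pointwise, $$|f-h|^p \le \bigl(|f-g| + |g-h|\bigr)^p \le |f-g|^p + |g-h|^p,$$ and integrating gives $d_p(f,h) \le d_p(f,g) + d_p(g,h)$.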

To be continued; I wanted to give a brief remark, but have decided otherwise along the way.

  • 0
    But $L^0 \supsetneqq \bigcup_{p \gt 0} L^p$. The right hand side is dense, but not everything. Partition $[0,1]$ into countably many intervals. On the $n$th piece take a function which is in $L^{\frac1n}$ but not in $L^p$ for $p \gt \frac1n$. You'll get a measurable function which is not in any $L^p$ for $p \gt 0$. (A one-function example is sketched after this comment thread.)2012-12-29
  • 0
    @Martin: True, I'll modify that. You want to end up with convergence in probability.2012-12-29
  • 0
    Agreed. I'm confident that something along the lines suggested by Godot in the comments to his answer works, and that it should look quite close to what you're doing. I guess the abstract point is that when taking the direct limit of complete metric spaces you need to take the completion to end up with a complete space.2012-12-29
  • 0
    Yes, indeed. But it needs a bit of work, as you need to mention the topology first. 8-).2012-12-29
  • 0
    @JonasTeuwen: I know this post was made some time ago, but do you have any thoughts on how to arrive at the $L^{0}$ topology of convergence in measure as a "limit" of the $L^{p}$ topologies?2014-06-30
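
A single concrete witness for the strict inclusion in the first comment above (a standard example, offered here as an alternative to the partition construction): on $(0,1)$ with Lebesgue measure, take $f(x) = e^{1/x}$. For every $p > 0$, $$\int_0^1 e^{p/x}\,\mathrm{d}x \ge \int_0^1 \frac{(p/x)^2}{2}\,\mathrm{d}x = \infty,$$ using $e^t \ge t^2/2$ for $t \ge 0$; so $f$ is measurable and finite everywhere, hence in $L^0$, but lies in no $L^p$ with $p > 0$.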