
Let $Y$ be a random variable that takes values in some set $X$ according to a probability measure $\mu$. If the $\sigma$-algebra on which $\mu$ is defined is not $2^X$, then there exists $A \subset X$ with $\mu(A)$ undefined. This implies that the event "A realization $y$ of $Y$ satisfies $y \in A$" has undefined probability. But that can't be right: if we sample $Y$ over and over, the frequency with which our event comes true should converge to some value, so the event does have a probability.

Must all probability measures be defined on $2^X$? Or is my intuition that in the real world, all events have a probability wrong?
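The convergence intuition behind the question is easy to check numerically for a *measurable* event; a minimal sketch (the event $A = [0, 0.3]$ under the uniform distribution is an assumed example, not from the question):

```python
import random

# Empirical frequency of a measurable event: draw from Uniform[0,1]
# and count hits in A = [0, 0.3]. The law of large numbers makes the
# frequency converge to mu(A) = 0.3 -- but only because A is measurable.
random.seed(0)
n = 100_000
hits = sum(1 for _ in range(n) if random.random() <= 0.3)
freq = hits / n
print(freq)  # close to 0.3
```

For a non-measurable $A$, no such law of large numbers is available, which is part of what the answers below explain.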

3 Answers


There are several reasons why we might work with $\sigma$-algebras smaller than $2^X$:

  1. We might want to draw a point from the unit interval $[0,1]$ so that every subset has a well-defined probability and no single point is drawn with positive probability. It is consistent with the usual axioms of set theory that this is impossible. In particular, it follows from a result of Ulam that this is impossible under the continuum hypothesis.

  2. Even stronger: we might want a uniform probability distribution defined on *all* subsets of $[0,1]$. The Vitali construction shows that this is impossible (given the axiom of choice).

  3. In practice, we are only interested in a small class of events and want efficient ways to compute their probabilities. For the Borel $\sigma$-algebra on $\mathbb{R}$, every probability distribution is determined by its cumulative distribution function, which is quite useful in practice.
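Point 3 can be sketched in a few lines of Python (standard library only): the CDF $F$ of a standard normal pins down every interval probability via $P(a < X \le b) = F(b) - F(a)$, and hence the whole Borel measure.

```python
from math import erf, sqrt

# CDF of the standard normal, built from the error function:
# F(x) = (1 + erf(x / sqrt(2))) / 2.
def F(x):
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

# P(a < X <= b) = F(b) - F(a): the CDF alone determines the
# probability of every interval, hence of every Borel set.
p = F(1.0) - F(-1.0)   # probability of (-1, 1]
print(round(p, 4))     # about 0.6827
```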

There are also problems with the relative frequency interpretation. Suppose $X=\mathbb{N}$ and nature draws the sequence $1,2,3,\ldots$. Each number occurs only once, so its relative frequency tends to zero; yet the whole space always has frequency $1$, so the limiting relative frequencies violate countable additivity.
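A quick numerical illustration of this failure (a sketch, assuming nature deals $1, 2, 3, \ldots$): after $n$ draws, every singleton $\{k\}$ has relative frequency at most $1/n$, while the whole space has frequency exactly $1$.

```python
# After n draws of the sequence 1, 2, 3, ..., every singleton {k}
# has relative frequency at most 1/n, yet the whole space always
# has frequency 1. Limits of relative frequencies therefore assign
# 0 to each singleton but 1 to their countable union: countable
# additivity fails.
n = 10_000
draws = list(range(1, n + 1))
freq_of_7 = draws.count(7) / n   # 1/n, tends to 0 as n grows
freq_of_all = len(draws) / n     # always exactly 1
print(freq_of_7, freq_of_all)
```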

  • Thanks. Reading between the lines on all these answers, I am gleaning that we *could* require that all probability measures be defined on $2^X$ by having $\mu$ assign each set its true probability, but by doing so, we (counter-intuitively) don't have countable additivity. Thus, we sometimes choose to sacrifice our ability to measure certain bizarre sets in exchange for the privilege of assuming countable additivity on the sets that we can measure. (2012-12-26)

In the "real world", there is neither true probability nor are there non-measureable sets. In the real world yuo might not be able to repeat a "random experiment" often enough to obtain something that conclusively indicates that convergence would occur if you repeated the experiment infinitely often (which is something you can't do in the first place). We'd expect that a uniform random number in $[0,1]$ is in a set $A\subset [0,1]$ with probability proportional to the "area" of $A$. However, we cannot nicely define such an area measure for all subsets of $[0,1]$.


A somewhat complementary answer to Hagen's would be that in probability theory, as opposed to measure theory, $\sigma$-fields have a much deeper meaning than just a technical framework necessary to define a measure.

Probabilists tend to equip their probability spaces with lots of different $\sigma$-fields (which are sub-$\sigma$-fields of the $\mathcal F$ that comes in the definition of the probability space), and view them as "amounts of information that an observer may have". For example, to an observer who knows just the value of a random variable $X$ and nothing else would correspond the $\sigma$-field $\sigma(X) = \{X^{-1}[A] \mid A \text{ Borel}\}$.
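This "information" reading can be made concrete with a toy finite example (a hypothetical sketch, not from the answer): take $\Omega$ to be a six-sided die and let $X$ be the parity of the outcome. The preimages $X^{-1}[A]$ form exactly the four events the parity-observer can decide.

```python
from itertools import chain, combinations

# Toy sample space: a six-sided die. An observer sees only the
# parity X(w) = w % 2 of the outcome w.
omega = {1, 2, 3, 4, 5, 6}

def X(w):
    return w % 2

values = {X(w) for w in omega}  # {0, 1}

def powerset(s):
    s = list(s)
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

# sigma(X) = the collection of preimages X^{-1}[A] over all subsets A
# of the value set: exactly the events this observer can decide.
sigma_X = {frozenset(w for w in omega if X(w) in set(A))
           for A in powerset(values)}
print(sorted(sorted(s) for s in sigma_X))
# four events: the empty set, the odds, the evens, and all of omega
```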

In light of this intuition, we can say that a set is measurable iff we have enough information to decide whether the outcome $\omega$ lies in it or not. And "bad" subsets of, say, $\mathbb{R}$ are actually so bad that we cannot measure the outcome of an experiment "precisely" enough to say whether it lies there.