
I understand that a random variable $X$ and a probability measure $P$ on a space $(\Omega,\mathcal{A})$ induce the distribution $P_X$ on a space $(\Omega',\mathcal{A}')$.

But is there an example where it is important to differentiate between the distribution $P_X$ (the pushforward measure) and the probability measure $P$?

Is there a theorem that deals with different distributions $(P_X)_n$ but only with one probability measure $P$?

Or is the distinction between the two measures only formal?

  • I'm sure opinions will differ, but I would prefer to always just use the original measure $P$ and write probabilities related to the random variable $X$ in the form $P[X \in A]$ for sets $A$ in the Borel sigma-algebra $\mathcal{B}$. One could formally say that the function $v:\mathcal{B}\rightarrow\mathbb{R}$ defined by $v(A) = P[X\in A]$ is also a valid probability measure on $\mathcal{B}$ (but so what?) – 2017-02-12

1 Answer


Many characteristics of a random variable (the mean, variance, characteristic function, etc.) depend only on the distribution of that random variable. In some sense, writing down a triple like $(\Omega, \mathcal A, \mathbb P)$ is quite artificial.
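To make this concrete, here is a small sketch (the space, the uniform measure, and the particular maps $X$ and $Y$ are my own illustrative choices, not anything from the question): two genuinely different random variables on the same finite probability space whose pushforward measures coincide, so every distribution-determined characteristic agrees.

```python
from fractions import Fraction
from collections import Counter

# A finite probability space: Omega = {0,...,5} with the uniform measure P.
Omega = range(6)
P = {w: Fraction(1, 6) for w in Omega}

def pushforward(X):
    """Compute the distribution P_X on singletons: P_X({x}) = P[X = x]."""
    dist = Counter()
    for w in Omega:
        dist[X(w)] += P[w]
    return dict(dist)

# Two *different* random variables on the same space ...
def X(w):
    return w % 2          # values 0,1,0,1,0,1

def Y(w):
    return 1 - (w % 2)    # values 1,0,1,0,1,0

# ... whose pushforward measures coincide: both are Bernoulli(1/2).
print(pushforward(X))                     # {0: Fraction(1, 2), 1: Fraction(1, 2)}
print(pushforward(X) == pushforward(Y))   # True
print(any(X(w) != Y(w) for w in Omega))   # True: X and Y differ pointwise
```

Since $P_X = P_Y$, the two variables share mean, variance, characteristic function, and so on, even though $X(\omega) \ne Y(\omega)$ for every $\omega$; only statements that look inside $\Omega$ can tell them apart.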

I was once working on a problem with a probabilist. When I mentioned $\omega \in \Omega$, he remarked that this meant I was doing not probability but measure theory.