
The tests I took in this subject a year ago were pretty easy, but the optional questions were insane. Here's one that haunts me to this day.

If $X$ is a random variable with $\text{Var}[X] = 0$, show that

$\mathbb P\{X = \mathbb E[X]\} = 1\quad.$

I know it is an utterly obvious fact, but proving it is quite hard.

  • 1
    @Luke: you haven't asked a question yet. (2011-07-05)

4 Answers

5

This is an elementary result in the setting of measure theory. Indeed,

${\rm Var}(X) := {\rm E}\big[(X - {\rm E}(X))^2\big] = \int_\Omega {(X - {\rm E}(X))^2 \,{\rm dP}},$

so, since $(X- {\rm E}(X))^2$ is a nonnegative random variable, it follows from

$\int_\Omega {(X - {\rm E}(X))^2 \,{\rm dP}} = 0$

that $(X- {\rm E}(X))^2 = 0$ with probability $1$, hence also ${\rm P}(X = {\rm E}(X)) = 1$.

EDIT: While the above proof is elementary in the setting of measure theory, quite obviously this is not the intended approach here. Let me elaborate on Henry's answer (the simple approach). Letting $Y=(X-{\rm E}(X))^2$, we want to show that ${\rm E}(Y)=0$ (that is, ${\rm Var}(X)=0$) implies ${\rm P}(X={\rm E}(X))=1$. We now show that for any nonnegative random variable $Y$, the condition ${\rm E}(Y)=0$ implies ${\rm P}(Y=0)=1$; this yields the desired result, since $Y=0$ iff $X={\rm E}(X)$ when $Y=(X-{\rm E}(X))^2$.

So, let $Y$ be a nonnegative random variable with ${\rm E}(Y)=0$, and fix $a > 0$. If ${\rm P}(Y > a) = b$ for some $b > 0$, then ${\rm E}(Y) \geq ab > 0$: indeed,

${\rm E}(Y) = \int_{[0,\infty )} {y\,F(dy)} \ge \int_{(a,\infty )} {y\,F(dy)} \ge a\int_{(a,\infty )} {F(dy)} = a\,{\rm P}(Y > a) = ab,$

where $F$ is the distribution of $Y$. We conclude that ${\rm P}(Y > a) = 0$, that is, ${\rm P}(Y \leq a)=1$, for every $a > 0$. On the other hand, ${\rm P}(Y \leq a)=0$ for all $a < 0$. Hence the distribution function of $Y$ is that of the constant random variable $0$ (recall that a distribution function is, in particular, right-continuous). Thus ${\rm P}(Y \leq 0) = 1$, hence also ${\rm P}(Y = 0) = 1$.
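The key step here, ${\rm E}(Y) \ge a\,{\rm P}(Y > a)$ for nonnegative $Y$ (a form of Markov's inequality), is easy to sanity-check numerically. A minimal Python sketch; the support and probabilities are made-up illustration values:

```python
# Numerical sanity check of E[Y] >= a * P(Y > a) for a nonnegative Y.
# The support and probabilities below are arbitrary illustration values.
values = [0.0, 0.5, 2.0, 3.0]
probs  = [0.4, 0.3, 0.2, 0.1]

expectation = sum(v * p for v, p in zip(values, probs))

for a in (0.25, 1.0, 2.5):
    tail = sum(p for v, p in zip(values, probs) if v > a)  # P(Y > a)
    assert expectation >= a * tail

print(round(expectation, 2))  # → 0.85
```

If some tail probability were positive while ${\rm E}(Y) = 0$, the assertion would fail; that is exactly the contradiction the proof exploits.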

  • 0
    @aengle: Your argument is obviously true in the case where $f$ is a continuous function, but this is a very particular case (of a measurable function). (2011-07-06)
2

Start with a non-negative random variable $Y$ and show that $\Pr\{Y>0\} > 0$ implies $\mathbb E[Y]>0$: if $\Pr\{Y>0\} > 0$, then $\Pr\{Y\ge a\} \ge b$ for some positive $a$ and $b$ (by continuity of measure, since $\{Y>0\}$ is the increasing union of the events $\{Y \ge 1/n\}$), and then $\mathbb E[Y] \ge ab > 0.$

Then, if the mean and variance of $X$ exist, let $Y = (X-\mathbb E[X])^2$ to show the contrapositive that $\Pr\{X \not = \mathbb E[X]\} > 0$ implies $\text{Var}[X] > 0$.
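A quick numerical illustration of this contrapositive, with a two-point distribution whose values are made up for the sketch:

```python
# If P(X != E[X]) > 0, then Var(X) = E[Y] > 0 with Y = (X - E[X])^2.
# The two-point distribution below is an arbitrary example.
values = [0.0, 10.0]
probs  = [0.9, 0.1]

mean  = sum(v * p for v, p in zip(values, probs))                # E[X]
var   = sum((v - mean) ** 2 * p for v, p in zip(values, probs))  # E[Y]
p_off = sum(p for v, p in zip(values, probs) if v != mean)       # P(X != E[X])

assert p_off > 0 and var > 0  # mass away from the mean forces Var > 0
print(mean, round(var, 2))    # → 1.0 9.0
```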

  • 0
    @Shai: What for? (2011-07-05)
2

It seems to me that Chebyshev's inequality is one way to deal with this question, if it was covered in your course. Let $\mu$ denote $E(X)$ and $\sigma^{2}$ denote ${\rm var}(X)$. Chebyshev's inequality (which holds for any probability distribution with finite variance) states that ${\rm Pr}(|X - \mu| > k \sigma) \leq \frac{1}{k^{2}}$ for any positive $k$. But if $\sigma = 0$, then $k\sigma = 0$ for every $k$, so ${\rm Pr}(|X - \mu| > 0) \leq \frac{1}{k^{2}}$ with $k$ arbitrarily large, giving ${\rm Pr}(|X - \mu| > 0 ) = 0.$ Of course, this doesn't help if Chebyshev's inequality was not on your course, but reading a proof of the inequality might be instructive in that case.
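For what it's worth, Chebyshev's bound is also easy to check empirically. A hedged Python sketch; the standard normal and the sample size are arbitrary choices for illustration:

```python
import random

# Empirical check of Chebyshev: P(|X - mu| > k*sigma) <= 1/k^2.
# The distribution (standard normal) and sample size are arbitrary choices.
random.seed(0)
sample = [random.gauss(0.0, 1.0) for _ in range(100_000)]

n = len(sample)
mu = sum(sample) / n
sigma = (sum((x - mu) ** 2 for x in sample) / n) ** 0.5

for k in (2.0, 3.0, 5.0):
    tail = sum(1 for x in sample if abs(x - mu) > k * sigma) / n
    assert tail <= 1 / k ** 2  # the bound is loose for the normal, but holds
```

The bound is far from tight for the normal distribution; Chebyshev trades sharpness for generality, which is what makes the degenerate $\sigma = 0$ case above work for any distribution.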

  • 0
    As a matter of fact, it was. Thanks. (2011-07-05)
1

Note that $X$ cannot be continuous (i.e., have a density $f$), since every continuous random variable has positive variance, from the formula

$\mathrm{Var}(X) = \int[x - E(X)]^2f(x)\;\mathrm{d}x > 0\quad.$

Now we know $X$ is discrete (this argument assumes $X$ is either discrete or continuous). Suppose $X$ takes some value $y \neq E(X)$ with positive probability. This forces a positive variance, from the formula

$\mathrm{Var}(X) = E([X - E(X)]^2) \ge \mathrm{P}(X = y)\,[y-E(X)]^2 > 0\quad\mathrm{where}\quad y\not=E(X) \quad.$

Therefore, $X$ can only have positive probability of being $E[X]$, and since the atom probabilities of a discrete random variable sum to $1$, we get $\mathrm{P}(X = E[X]) = 1$.
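The per-value lower bound on the variance used here can be checked numerically for any discrete distribution. A minimal Python sketch with an arbitrary three-point example:

```python
# Check Var(X) >= P(X = y) * (y - E[X])^2 for each value y != E[X].
# The three-point distribution below is an arbitrary illustration.
values = [1.0, 2.0, 6.0]
probs  = [0.5, 0.3, 0.2]

mean = sum(v * p for v, p in zip(values, probs))
var  = sum((v - mean) ** 2 * p for v, p in zip(values, probs))

for y, p in zip(values, probs):
    if y != mean:
        assert var >= p * (y - mean) ** 2  # each atom alone bounds the variance

print(round(mean, 2), round(var, 2))  # → 2.3 3.61
```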

  • 0
    That's OK, I improve formatting all the time. I believe knowing (and remembering) $\LaTeX$ is highly convenient, but not mandatory on this site. You're welcome. (2011-07-06)