Let $\Omega$ be a countably infinite set and let $P:2^{\Omega} \to \mathbb{R}$ be a probability measure. Show that there exist $\omega_1, \omega_2 \in \Omega$ such that $P(\{\omega_1\}) \neq P(\{\omega_2\})$. Can it hold that $P(\{\omega\}) > 0$ for all $\omega \in \Omega$?
In a countably infinite sample space $\Omega$, is it possible that $P(\{\omega_1\}) \neq P(\{\omega_2\})$ for some $\omega_1, \omega_2 \in \Omega$?
-
What work have you done so far? – 2017-01-16
-
[Oops, I clicked save by accident. I'll have more in a sec.] Well, since $\Omega$ is a countably infinite set, you couldn't define the probability the following way: $P(A) = \frac{|A|}{|\Omega|}$. This was the only way I could imagine $P(\{\omega_1\}) = P(\{\omega_2\})$ for all $\omega_1, \omega_2 \in \Omega$. But I feel like I can do nothing with this argument if we're talking about the first question. – 2017-01-16
-
Let's imagine that $\Omega = \mathbb N$, as that's easier to work with. In this case, what's $P(\{1,2,3,\dots,100\})?$ Additionally, note that if $A$ is all odd numbers, and $B$ is all even numbers, then $A\cap B = \emptyset$, but $P(A)+P(B)$ is what? (is it $1$, like it should be, or something else?) – 2017-01-16
-
About the first question: I thought I could try to work with the axioms of the probability measure, event space, and sample space, but none seem to help me. The only one I can imagine helping is that if $$ A \in 2^{\Omega}, \text{ then } (\Omega \setminus A ) \in 2^{\Omega} $$ – 2017-01-16
-
As a hint, a probability measure on a countably infinite set is: 1. not a continuous probability (it would be on an uncountably infinite set then), 2. not a finite probability (such as the uniform). This leaves probability measures such as the Poisson, geometric, etc. Looking at those might be helpful. – 2017-01-16
-
Oh I see, thank you. I'll try to work from here! – 2017-01-16
2 Answers
Hint: Since $P$ is a probability measure it is $\sigma$-additive. In particular, for any enumeration $(\omega_i)_{i\geq 1}$ of the points of $\Omega$, $$1=P(\Omega)=P\left(\bigcup_{i=1}^\infty \{\omega_i\}\right)=\sum_{i=1}^\infty P(\{\omega_i\}).$$ What happens if $P(\{\omega_i\})=P(\{\omega_j\}), \:\forall i,j\geq 1$? Can you really rule out $P(\{\omega\})>0\: \forall\omega\in\Omega$?
-
Is it that if $P(\omega_i)=P(\omega_j),\forall i,j≥1$, then $P(\Omega) \neq 1$, which isn't possible? We know that if $P(\omega_i)=P(\omega_j),\forall i,j≥1$, then $P(\omega_i)=P(\omega_j) \neq 0$, because that would mean that $P(\Omega) = 0 \neq 1$, and if $P(\omega_i)=P(\omega_j) = p$ with $p \in [0,1]$, then $P(\Omega) = \infty$ since $\Omega$ is countably infinite? I think this is the contradiction I was looking for (but I'm not sure if all my steps are valid). – 2017-01-16
-
You're right, but small typo: $P(\omega_i) = P(\omega_j) = 0$ should read $P(\omega_i) = P(\omega_j) \neq 0$ – 2017-01-16
-
Almost. In your notation, when $P(\omega_i)=P(\omega_j),\forall i,j≥1$: if $p=0$ then $P(\Omega) = 0 \neq 1$, and if $p>0$ then $P(\Omega) = \infty \neq 1$. This is the contradiction you are looking for in order to answer your first question. – 2017-01-16
Thanks for the help! I think this is the answer.
Question 1: Show that there exist $\omega_1, \omega_2 \in \Omega$ such that $P(\{\omega_1\}) \neq P(\{\omega_2\})$.
Suppose $P(\{\omega_i\}) = P(\{\omega_j\})$ for all $i,j \in \mathbb{N}$. If $P(\{\omega_i\}) = P(\{\omega_j\}) = 0$, then $P(\{\omega_1\}) + P(\{\omega_2\}) + \ldots = 0$. We also know that if $A_1, A_2, \ldots$ are pairwise disjoint events in $\mathcal{F} = 2^\Omega$ (the event space), then $$ \sum_{i=1}^{\infty} P(A_i) = P\left( \bigcup_{i=1}^{\infty} A_i\right) $$ since $P: 2^{\Omega} \to \mathbb{R}$ is a probability measure. Since $\{\omega_i\} \cap \{\omega_j\} = \emptyset$ for all $i \neq j$, the singletons are pairwise disjoint events in $\mathcal{F}$. So it means that $$ P(\{\omega_1\}) + P(\{\omega_2\}) + \ldots = \sum_{i=1}^{\infty} P(\{\omega_i\}) = P\left(\bigcup_{i=1}^{\infty} \{\omega_i\} \right) = P(\Omega) = 0. $$ This leads to a contradiction, since $P(\Omega) = 1$ must hold because $P$ is a probability measure.
Now suppose $P(\{\omega_i\}) = P(\{\omega_j\}) = p, \forall i,j \in \mathbb{N}$ with $p \in (0,1]$. Then the sum $$ P(\{\omega_1\}) + P(\{\omega_2\}) + \ldots = \sum_{i=1}^{\infty} P(\{\omega_i\}) = \sum_{i=1}^{\infty} p = \infty $$ would diverge. And this would mean that $$ \sum_{i=1}^{\infty} P(\{\omega_i\}) = P\left(\bigcup_{i=1}^{\infty} \{\omega_i\} \right) = P(\Omega) = \infty, $$ which would be an obvious contradiction with $P$ being a probability measure. In conclusion, $\exists i,j$ such that $$ P(\{\omega_i\}) \neq P(\{\omega_j\}) $$
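The dichotomy in the argument above can also be seen numerically (a minimal Python sketch, with $p = 0.001$ as an arbitrary positive value): with a constant mass $p$ on every point, the partial sums $\sum_{i=1}^{N} p = Np$ either stay at $0$ forever or grow without bound, so they can never converge to $1$.

```python
# Partial sum of a constant per-point mass p over the first N points.
def partial_sum(p, N):
    return p * N  # sum_{i=1}^{N} p

for p in (0.0, 0.001):  # p = 0, and an arbitrary p > 0
    print(p, [partial_sum(p, N) for N in (10, 1_000, 1_000_000)])
# p = 0     -> every partial sum is 0
# p = 0.001 -> the partial sums eventually overshoot 1 and keep growing
```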
As for the second question: can it hold that $P(\{\omega\}) > 0$ for all $\omega \in \Omega$? If we look at how many times we have to toss a fair coin until the first head appears, then $\Omega = \mathbb{N}$, $\mathcal{F} = 2^{\Omega}$ and $P(\{n\}) = \left(\frac{1}{2}\right)^n$ for all $n \in \Omega$. So it can hold that $P(\{\omega\}) > 0$ for all $\omega \in \Omega$.
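A quick Python check that this example really is a probability measure: the masses $(1/2)^n$ are all positive, and their partial sums approach $1$ (the tail beyond $n = 59$ is smaller than $2^{-59}$):

```python
# Geometric example: P({n}) = (1/2)^n for n = 1, 2, 3, ...
masses = [0.5**n for n in range(1, 60)]

total = sum(masses)                 # 1 - 2**-59
print(total)                        # ~1.0
print(all(m > 0 for m in masses))   # every point has positive mass
```

Note that, consistent with the first part of the answer, these positive masses are necessarily unequal.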
-
2There's a typo at the first contradiction: $P$ is the probability measure. Also, the geometric distribution example at the end specifies the waiting time for the first success (say, head) to appear, so $k=1$. – 2017-01-16