
Edit

(As Robert pointed out, what I was trying to prove is incorrect, so I now ask the corrected question here to avoid posting a duplicate.)

For infinitely many independent Bernoulli trials, each with success probability $p$, define a random variable $N$ equal to the number of successful trials. Intuitively, if $p > 0$ then $\Pr \{N < \infty \} = 0$; in other words, $N \rightarrow \infty$ almost surely. But I got stuck when trying to prove this mathematically (here $b(i; n, p) = \binom{n}{i} p^i (1-p)^{n-i}$ denotes the binomial probability mass function).

$$\begin{aligned} \Pr \{ N < \infty \} & = \Pr \Big\{ \bigcup_{n=1}^{\infty} [N \le n] \Big\} \\ & = \lim_{n \rightarrow \infty} \Pr \{ N \le n \} \\ & = \lim_{n \rightarrow \infty}\sum_{i=1}^{n} b(i; \infty, p) \\ & = \sum_{i=1}^{\infty} b(i; \infty, p) \end{aligned}$$

I have no idea how to evaluate the last expression.


(Original Question)

For infinitely many independent Bernoulli trials, each with success probability $p$, define a random variable $N$ equal to the number of successful trials. Can we prove that $\Pr \{N < \infty \} = 1$ by the following computation?

$$\begin{aligned} \Pr \{ N < \infty \} & = \Pr \Big\{ \bigcup_{n=1}^{\infty} [N \le n] \Big\} \\ & = \lim_{n \rightarrow \infty} \Pr \{ N \le n \} \\ & = \lim_{n \rightarrow \infty}\sum_{i=1}^{n} b(i; \infty, p) \\ & = \sum_{i=1}^{\infty} b(i; \infty, p) \\ & = \lim_{m \rightarrow \infty}\sum_{i=1}^{m} b(i; m, p) \\ & = \lim_{m \rightarrow \infty}[p + (1 - p)]^m \\ & = \lim_{m \rightarrow \infty} 1^m \\ & = 1 \end{aligned}$$

I know there must be a mistake somewhere in the process, because if $p = 1$ then $N$ must be infinite, so the equation could only hold when $p < 1$. Which step is wrong?

  • Please: don't write about "infinite trials" when you mean _infinitely many_ trials. (2011-09-13)

4 Answers

1

Let $E_{k,n}$ denote the event of winning exactly $k$ times within the first $n$ trials, and let $E_k = \{N = k\}$ be the event of winning exactly $k$ times in the whole infinite sequence.

If $E_k$ occurs, then all $k$ successes fall within the first $m$ trials for some finite $m$, so $E_{k,n}$ occurs for every $n \ge m$; that is, $E_k = \bigcup_{m\ge 1}\bigcap_{n\ge m} E_{k,n}$, and therefore $P(E_k) \le \liminf_{n\to+\infty} P(E_{k,n}) = \lim_{n\to+\infty}\binom{n}{k}p^k(1-p)^{n-k}.$ Assuming $0 < p < 1$ (for $p = 1$ every trial succeeds and $N = \infty$ surely), one then has

$0\leq P(E_k) \leq \lim_{n\to+\infty}\left(\frac{p}{1-p}\right)^k\binom{n}{k}(1-p)^n \leq C(p,k)\lim_{n\to+\infty}n^k(1-p)^n = 0,$ where $C(p,k) = \left(\frac{p}{1-p}\right)^k$ is independent of $n$ and we used $\binom{n}{k} \leq n^k$.
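To see why the last limit vanishes, note that exponential decay beats polynomial growth; taking logarithms (with $0 < p < 1$, so $\ln(1-p) < 0$):

$$n^k(1-p)^n = \exp\bigl(k\ln n + n\ln(1-p)\bigr) \longrightarrow 0 \quad\text{as } n \to +\infty,$$

since the negative linear term $n\ln(1-p)$ dominates the logarithmic term $k\ln n$, so the exponent tends to $-\infty$.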

Now, the event you are asking about is precisely $\bigcup_{k=0}^{+\infty}E_k$; hence, by countable subadditivity of the probability measure, the probability of winning only finitely many times in an infinite sequence of trials is at most $\sum_{k=0}^{+\infty}P(E_k) = 0$, and so it equals $0$.

4

As long as $p > 0$, $N$ will be $\infty$ with probability 1. The first mistake is in $ \sum_{i=1}^\infty b(i;\infty,p) = \lim_{m \to \infty} \sum_{i=1}^m b(i; m,p)$
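To see concretely why that equality fails, compare the two sides for $p > 0$. Each fixed term is $b(i;\infty,p) = \lim_{m \to \infty} \binom{m}{i} p^i (1-p)^{m-i} = 0$, so the left-hand side is $\sum_{i=1}^{\infty} 0 = 0$. The right-hand side, however, is

$$\lim_{m \to \infty}\sum_{i=1}^{m} b(i; m, p) = \lim_{m \to \infty}\bigl(1 - b(0; m, p)\bigr) = \lim_{m \to \infty}\bigl(1 - (1-p)^m\bigr) = 1.$$

Tying the number of trials and the summation range to the same index $m$ silently interchanges two limits, and here that interchange is invalid.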

3

You want to compute the probability of exactly $s$ successes for $s = 0, 1, 2, \ldots$. The crucial point is that $s$ is fixed first, and then you compute the probability of getting exactly $s$ successes when you toss infinitely many coins (each with success probability $p$). In other words, we want $$\lim_{m \to \infty} b(s; m, p) = \lim_{m \to \infty} \binom{m}{s} p^s (1-p)^{m-s} = \left(\frac{p}{1-p}\right)^s \lim_{m \to \infty} \binom{m}{s} (1-p)^m.$$ Intuitively, this should come out to $0$ (since you are tossing infinitely many coins). How can we justify that rigorously? By suitably upper-bounding this function of $m$ and then applying the sandwich theorem.

When $s$ is fixed, the first factor $\binom{m}{s}$ grows at most polynomially in $m$, since we can bound it loosely by $\binom{m}{s} \leq m^s$. On the other hand, $(1-p)^m$ goes to zero exponentially fast. Can you use this to finish the proof?
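For the record, a sketch of how the sandwich argument finishes (assuming $0 < p < 1$; for $p = 1$ one trivially has $N = \infty$): setting $a_m = m^s(1-p)^m$, the ratio $\frac{a_{m+1}}{a_m} = \left(\frac{m+1}{m}\right)^s (1-p) \to 1-p < 1$, so $a_m \to 0$, and therefore

$$0 \le \binom{m}{s}(1-p)^m \le m^s(1-p)^m \longrightarrow 0, \qquad\text{hence}\quad \lim_{m\to\infty} b(s; m, p) = 0 \text{ for every fixed } s.$$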

2

For $p > 0$, it is not true that $\Pr \{N < \infty\} = 1$.

One way to show this: the median of a binomial random variable with $n$ trials and success probability $p$ is $\lfloor np \rfloor$ or $\lceil np \rceil$, which increases without bound as $n$ increases. So for any fixed $m$, once $n$ is large enough that the median exceeds $m$, you have $\Pr \{ N \le m \} \le 0.5$; letting $m \to \infty$, your $\Pr \{ N < \infty \}$ can be no more than $0.5$. It is in fact $0$.
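As a numerical sanity check (a minimal sketch, not part of the argument above; the values $p = 0.3$ and $m = 10$ are arbitrary illustrations), the snippet below computes $\Pr\{N_n \le m\}$ for $N_n \sim \mathrm{Binomial}(n, p)$ directly from the pmf and shows it collapsing toward $0$ as $n$ grows, consistent with $\Pr\{N < \infty\} = 0$:

```python
# Sketch: for a fixed threshold m, P(Binomial(n, p) <= m) should shrink
# toward 0 as the number of trials n grows, illustrating Pr{N < inf} = 0
# for p > 0. The values p = 0.3 and m = 10 are arbitrary choices.
from math import comb

def binom_cdf(m: int, n: int, p: float) -> float:
    """P(Binomial(n, p) <= m), summed term by term from the pmf."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(m + 1))

p, m = 0.3, 10
for n in (50, 100, 200, 400, 800):
    print(f"n = {n:4d}: P(N_n <= {m}) = {binom_cdf(m, n, p):.3e}")
```

Each printed probability is smaller than the previous one, matching the median argument: once $\lfloor np \rfloor > m$, the cdf at $m$ has fallen below $0.5$ and keeps decaying.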