
This is possibly a follow-up question to this one:

different probability spaces for $P(X=k)=\binom{n}{k}p^k\big(1-p\big)^{n-k}$?

Consider the two models in the title:

  • a coin with heads-probability $p$ being tossed $n$ times
  • $n$ such coins each being tossed once

and calculate, in each model, the probability that heads appears exactly $k$ times ($0\leq k\leq n$). Either way one arrives at the same answer:

$$P(\text{heads appears } k \text{ times}) = \binom{n}{k}p^k\big(1-p\big)^{n-k}$$

However, the first one can be regarded as a random process, where the underlying probability space is $\Omega = \{0,1\}$ ($1$ denotes "heads" and $0$ denotes "tails") and the time set is $T=\{1,2,\cdots,n\}$, while in the second one the underlying probability space is $\Omega = \{0,1\}^n$.
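To make the second point of view concrete (assuming the $n$ coins are independent, so that $\{0,1\}^n$ carries the product measure), each outcome $\omega=(\omega_1,\dots,\omega_n)$ has probability

$$P(\{\omega\}) = \prod_{i=1}^{n} p^{\omega_i}\big(1-p\big)^{1-\omega_i} = p^{\sum_i \omega_i}\big(1-p\big)^{n-\sum_i \omega_i},$$

and since exactly $\binom{n}{k}$ sequences satisfy $\sum_i \omega_i = k$, summing over them recovers the formula above. It is the analogous computation in the first, process-based picture that I do not know how to set up.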

Here are my questions:

  • How can I arrive at the same formula from these two different points of view?
  • Are these two models essentially the same?
  • @AndréNicolas When $n$ is very large, the first model assumes that one has a large amount of time to waste tossing the one coin, and the second model assumes that one has a large amount of money ($n$ coins!) and many hands to toss them all simultaneously! :-) – 2011-10-04

2 Answers


The models are essentially the same. I think this automatically answers your first question as well.

You can see the two as trading a space dimension for a time dimension.
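As a sanity check, here is a minimal simulation sketch (in Python, with illustrative values for $n$, $p$, and the number of trials) comparing the two readings against $\binom{n}{k}p^k(1-p)^{n-k}$:

```python
import random
from math import comb

n, p, trials = 10, 0.3, 100_000   # illustrative values
rng = random.Random(0)

# Model 1: one coin, tossed n times in sequence (a path through time).
time_counts = [0] * (n + 1)
for _ in range(trials):
    heads = sum(rng.random() < p for _ in range(n))
    time_counts[heads] += 1

# Model 2: n coins, each tossed once (one point of {0,1}^n).
space_counts = [0] * (n + 1)
for _ in range(trials):
    tosses = [1 if rng.random() < p else 0 for _ in range(n)]
    space_counts[sum(tosses)] += 1

# Both empirical distributions should approximate the binomial pmf.
for k in range(n + 1):
    exact = comb(n, k) * p**k * (1 - p)**(n - k)
    print(k, time_counts[k] / trials, space_counts[k] / trials, round(exact, 4))
```

Notice that the two loops are line-for-line the same program; the simulation makes the trade between space and time rather literal.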


Both models are basically a way to put a probability measure on $\{0,1\}^n$.

Usually you will be given a probability distribution on $\{0,1\}$ and will try to extend it to a probability measure on $\{0,1\}^n$, according to some extra assumption.

If one experiment (tossing a coin) does not influence the other (tossing it again, or tossing another coin), then you will have the model you describe.
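A small sketch of that extension (in Python, with illustrative $n$ and $p$): independence is exactly what justifies multiplying the single-toss probabilities coordinate by coordinate.

```python
from itertools import product
from math import comb

n, p = 4, 0.3   # illustrative values

# Extend the single-toss distribution q(1) = p, q(0) = 1 - p to {0,1}^n
# by independence: P(omega) is the product of the coordinate probabilities.
P = {omega: p ** sum(omega) * (1 - p) ** (n - sum(omega))
     for omega in product((0, 1), repeat=n)}

assert abs(sum(P.values()) - 1.0) < 1e-12   # it is a probability measure

# The induced distribution of the number of 1s is the binomial formula.
for k in range(n + 1):
    total = sum(pr for omega, pr in P.items() if sum(omega) == k)
    print(k, round(total, 6), round(comb(n, k) * p**k * (1 - p)**(n - k), 6))
```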

The point is that when you talk about a random process, usually you are allowing for the possibility that the result of one experiment (a toss of the coin) influences the result of the next (tossing it again). Changing this condition, you might get a different probability distribution on $\{0,1\}^n$.

For example, it might be assumed that when the outcome is $1$, the probabilities for the next outcome are flipped, that is, $p$ becomes $1-p$. A more concrete example is the probability of a certain letter appearing in a text: after a consonant, it is likely that the next letter will be a vowel, and after a "p" we are not likely to get an "x" or a "w".
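Here is a sketch of that flipped-probability chain (in Python; one possible reading is that the heads-probability toggles between $p$ and $1-p$ every time a $1$ occurs). The induced distribution of the number of $1$s no longer matches the binomial formula:

```python
from itertools import product
from math import comb

n, p = 4, 0.3   # illustrative values

def chain_prob(omega, p):
    """Probability of the sequence omega when each outcome 1 flips
    the heads-probability from its current value q to 1 - q."""
    prob, q = 1.0, p
    for bit in omega:
        prob *= q if bit == 1 else 1.0 - q
        if bit == 1:
            q = 1.0 - q   # flip after a head
    return prob

dist = {omega: chain_prob(omega, p) for omega in product((0, 1), repeat=n)}
assert abs(sum(dist.values()) - 1.0) < 1e-12   # still a valid measure

# Compare with the independent (binomial) case: the two now disagree.
for k in range(n + 1):
    total = sum(pr for omega, pr in dist.items() if sum(omega) == k)
    print(k, round(total, 6), round(comb(n, k) * p**k * (1 - p)**(n - k), 6))
```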