
I'm stuck on a problem from the famous Bertsekas & Tsitsiklis probability book:

The premise

Imagine a TV game show where each contestant i spins an infinitely calibrated wheel of fortune, which assigns him/her a real number between 1 and 100. All values are equally likely, and the value obtained by each contestant is independent of the value obtained by any other contestant.

Let N be the integer-valued random variable whose value is the index of the first contestant who is assigned a smaller number than contestant 1. As an illustration, if contestant 1 obtains a smaller value than contestants 2 and 3 but contestant 4 has a smaller value than contestant 1 (X4 < X1), then N = 4. Find P(N > n) as a function of n.

For the first part, finding $P(N > n)$, I saw that it is a geometric distribution: the chance that a number is larger than a given $n$ is $(100-n)/99$, since the wheel's value is uniformly distributed.

Thus, for this geometric distribution, if we want the $N$-th roll to be the first roll whose number is less than the first roll, we get the following probability:

$P(N \gt n) = ((100-n)/99)^{N-1} ((n-1)/99),$ which effectively says we want the first $N-1$ rolls to be greater than the first roll, and the last one to be the complement: $1-(100-n)/99 = (n-1)/99$.

So now that we have this, assuming it's correct, I'm stuck on the following:

Find E[N], assuming an infinite number of contestants.

I know that the expected value formula is the following:

$\sum_{x=2}^{\infty} x\,p_N(x)$, where $x$ is the index of the contestant. However, the contestant's probability also depends on the value of the first spin. Is the next logical step to decompose this into a double sum and evaluate it? Or is it something completely different?

  • 0
    First, there’s a piece of information missing from the problem: what is the value of $N$ if the first contestant’s spin is the smallest? ($1$ seems reasonable.) In addition, you’re confusing $n$ in $P(N\gt n)$ with the result of the first contestant’s spin. You can see that there’s something wrong with your solution so far by considering the case of only two contestants: by symmetry, $P(N=2)$ must be $1/2$, that is, the second contestant has a smaller spin half of the time, but your solution produces $(1-1)/99=0$. (2017-01-03)

1 Answer


There are a few problems with the first part, so I'll show what I have there first.

You're right that the geometric distribution is important here because you can look at players $2,3,\ldots$ as making successive attempts to beat player one's roll and we're interested in the distribution of the number of attempts.

Say the first player rolls $c \in [1,100].$ Then the probability that a player fails to roll smaller than $c$ (i.e. rolls larger than $c$) is $$ 1 -\frac{c-1}{99} $$

The event $N>n$ occurs whenever contestants $2$ through $n$ all fail to roll smaller than $c$. Since they all roll independently, this gives $$ P(N>n \mid c) = \left(1 - \frac{c-1}{99}\right)^{n-1} $$ as the probability that the first contestant to roll less than player 1 has index greater than $n$, given that player 1 rolled $c$.
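As a sanity check, here is a quick Monte Carlo sketch of the conditional probability (the function name and trial count are my own, not from the book): contestants $2$ through $n$ each spin, and we count how often all of them fail to beat $c$.

```python
import random

def p_gt_n_given_c(c, n, trials=100_000):
    """Monte Carlo estimate of P(N > n | player 1 rolled c):
    contestants 2..n must all fail to roll below c."""
    hits = sum(
        all(random.uniform(1, 100) >= c for _ in range(n - 1))
        for _ in range(trials)
    )
    return hits / trials

# Compare against the closed form (1 - (c - 1) / 99) ** (n - 1);
# e.g. for c = 50.5 and n = 3 that is 0.5 ** 2 = 0.25.
```

With $c = 50.5$ and $n = 3$ the estimate should land near $0.25$.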

Then to get the unconditional probability we just integrate out c, which is uniformly distributed on [1,100]: $$ P(N>n) = \int dc P(N>n\mid c) P(c) = \int_1^{100}\frac{dc}{99}\left(1 - \frac{c-1}{99}\right)^{n-1} = \frac{1}{n}. $$

There is a faster way to get this. Let $X_1,X_2,\ldots$ be the values of the players' rolls, and consider the first $n$ of them. By symmetry, all $n!$ orderings of $X_1,\ldots,X_n$ (such as $X_1 < X_2 < \cdots < X_n$ or $X_2 < X_1 < \cdots$) are equally likely. The event $N>n$ occurs exactly when $X_1$ is the smallest of the first $n$ rolls, so $P(N>n) = 1/n$ by symmetry.
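The symmetry argument is also easy to check numerically. A small simulation sketch (function name and trial count are my own choices): draw $n$ i.i.d. spins and record how often the first one is the minimum.

```python
import random

def p_first_is_min(n, trials=200_000):
    """Estimate P(N > n) as the fraction of trials in which the first
    of n i.i.d. uniform spins is the smallest."""
    hits = 0
    for _ in range(trials):
        spins = [random.uniform(1, 100) for _ in range(n)]
        hits += spins[0] == min(spins)
    return hits / trials
```

For $n = 2$ the estimate should hover around $1/2$, and for $n = 4$ around $1/4$.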

Now that you have $P(N>n)$ you can get $E(N)$ as follows:

We can rewrite the random variable $N$ as $$ N = 1 + \sum_{k =1}^\infty I(N>k) $$ where $I(N>k)$ is $1$ when $N>k$ and $0$ otherwise. This formula becomes obvious if you stare at it for a bit: the sum counts all of the positive integers less than $N$ (of which there are $N-1$), and then we add one. Taking the expected value of both sides gives $$ E(N) = 1 + \sum_{k=1}^\infty P(N>k), $$ where we used the fact that $E(I(N>k)) = P(N>k).$ Thus the expected value is $$ E(N) = 1 + \sum_{k=1}^\infty \frac{1}{k} = \infty. $$
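To see the divergence concretely, here is a sketch (the helper name is my own) of the truncated sum $1 + \sum_{k=1}^{K} 1/k$, which grows like $\ln K$ and never settles on a finite value:

```python
def truncated_EN(K):
    """Partial sum 1 + H_K of the formula for E(N), where H_K is the
    K-th harmonic number; this grows like log K without bound."""
    return 1.0 + sum(1.0 / k for k in range(1, K + 1))

# Each tenfold increase in K adds roughly ln(10) ≈ 2.3 to the partial sum,
# so the truncated expectation keeps climbing as more contestants are allowed.
```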

  • 0
    Hey space! I'm a little confused about how you did the last part, mainly the step of taking the expected value of both sides, and also the fact (which I couldn't derive) that $P(N>n) = E(I(N>n))$. Could you give me some hints? (2017-01-03)
  • 0
    I understood the work you did up until then, but then I used the standard expectation formula, which involves $P(N=n)$. I got $P(N=n)$ from $P(N>n-1)-P(N>n)$, which I thought was intuitive. I don't have pen or paper with me right now, but I'll verify that my method works. (2017-01-03)
  • 0
    @OneRaynyDay Computing $P(N=n)$ in the way you mentioned and then summing $E(N) = \sum_n nP(N=n)$ should work as well. As for your question on my method, think of how you'd compute the expected value of $I(N>n),$ the random variable that is $1$ whenever $N$ winds up being greater than $n$ and $0$ otherwise. You would wind up summing the probabilities of all the outcomes with $N>n,$ right? (Because those outcomes have $I(N>n) = 1$ while the others have $I(N>n) = 0$.) Does it make sense why $E(I(N>n)) = P(N>n)$? (2017-01-03)