I know that the expected value of a geometrically distributed random variable is $\frac1p$, but how do we get there? This is what I have so far: $$E(X)=\sum_{x=1}^\infty xP(X=x),$$ where $X$ is the number of failures until the first success. Since it's geometric we have: $$\begin{align} E(X) &= \sum_{x=1}^\infty xp(1-p)^{x-1}\\ &= \frac{p}{1-p} \sum_{x=1}^\infty x(1-p)^x\\ &= \dots \end{align}$$ How do we sum that?
Where am I going wrong with this proof of expected value of a geometric random variable?
-
Differentiate a well-known series. – 2012-11-12
2 Answers
Set $r=1-p$ and recall the geometric series formula $$ \sum\limits_{x=1}^\infty r^x=\frac{r}{1-r}. $$ Then $$ \sum\limits_{x=1}^\infty x r^x= r\sum\limits_{x=1}^\infty x r^{x-1}= r\sum\limits_{x=1}^\infty \frac{d}{dr} r^x= r \frac{d}{dr}\sum\limits_{x=1}^\infty r^x= r\frac{d}{dr}\frac{r}{1-r}. $$ Is the rest clear?
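For completeness, here is a sketch of the remaining steps in the same notation: $$ r\frac{d}{dr}\frac{r}{1-r}=r\cdot\frac{1}{(1-r)^2}=\frac{r}{(1-r)^2}, $$ so, substituting $r=1-p$ back into the expression from the question, $$ E(X)=\frac{p}{1-p}\sum\limits_{x=1}^\infty x(1-p)^x=\frac{p}{1-p}\cdot\frac{1-p}{p^2}=\frac{1}{p}. $$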
-
Then wouldn't we have $\sum_{x=1}^\infty xr^x$? How do we take care of the extra $x$? – 2012-11-12
-
Oh sorry, I'll fix it. – 2012-11-12
-
I'm a little confused: isn't $r$ a constant? If so, then the derivative with respect to $x$ would be $r^x \ln r$, right? – 2017-10-04
-
@Oleg You can regard $r$ as a variable in these equalities (the differentiation is with respect to $r$, not $x$). – 2017-10-04
An experiment has probability of success $p\gt 0$, and probability of failure $1-p$. We repeat the experiment until the first success. Let $X$ be the total number of trials. We want $E(X)$.
Do the experiment once. So we have used $1$ trial. If this trial results in success (probability $p$), then the expected number of further trials is $0$. If the first trial results in failure (probability $1-p$), that trial has been wasted, and by memorylessness the expected number of further trials is again $E(X)$. Thus $$E(X)=1+(p)(0)+(1-p)E(X).$$ If $E(X)$ exists, we can solve this equation and obtain $E(X)=\dfrac{1}{p}$.
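Solving that equation explicitly (just the one line of algebra the answer leaves implicit): $$E(X)=1+(1-p)E(X)\implies pE(X)=1\implies E(X)=\frac{1}{p}.$$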
-
I like the insight you made, but can you tell me why the $1$ isn't multiplied by $p$? Since $E[X]= \displaystyle\sum_{k=0}^{\infty} k\Pr(X=k)$? – 2013-12-18
-
We could do it your way. With probability $p$ we get $X=1$ (giving a term $p$); with probability $1-p$, the conditional expectation is $1+E(X)$. That gives $E(X)=p+(1-p)(1+E(X))$, yielding exactly the same thing for $E(X)$ as the answer obtained in (I think) a marginally simpler way. – 2013-12-18
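Expanding the algebra in that equation (for completeness): $$E(X)=p+(1-p)(1+E(X))=p+(1-p)+(1-p)E(X)=1+(1-p)E(X),$$ which is the same equation as in the answer, so it again gives $E(X)=\frac{1}{p}$.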
-
Let me try to see if I follow what you are doing. Are you splitting the summation into parts, meaning the first term is $\displaystyle\sum_{k=1}^1 1\cdot\Pr(X=1)$, and then representing the rest of the sum as $E[X]$? – 2013-12-18
-
The argument uses **conditional expectation**, conditioning on the first outcome. In this example, it can also be viewed as a summation trick or technique. – 2013-12-18
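Written out with the law of total expectation (a formal restatement of the conditioning argument, with $S$ and $F$ denoting success and failure on the first trial): $$E(X)=\Pr(S)\,E(X\mid S)+\Pr(F)\,E(X\mid F)=p\cdot 1+(1-p)\bigl(1+E(X)\bigr).$$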
-
So it's the probability of success, $p$, plus the probability of failure, $1-p$, times the expected value if we fail the first trial, $1+E[X]$, if I got the logic down. – 2013-12-18
-
Can we use the same logic to show that the expected number of failures before the first success is $p/q$? – 2013-12-18
-
You may prefer to express the equation in my post as $E(X)=(p)(1)+(1-p)(1+E(X))$. Then the conditional expectation nature will be clearer. By the way, my $X$ is not the same as the OP's: my $X$ is the number of trials up to and including the first success. If we want the mean number of **failures** before the first success, the answer is $\frac{q}{p}$, where $q=1-p$. – 2013-12-18
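That last value also follows directly from the answer above (a short check, writing $Y$ for the number of failures before the first success, so $Y=X-1$): $$E(Y)=E(X)-1=\frac{1}{p}-1=\frac{1-p}{p}=\frac{q}{p}.$$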