3

This question is particularly directed at those who have Grimmett & Stirzaker, Probability and Random Processes (2005), at hand. It concerns the proof step just before equation (10), p. 166. For others:

  • $X_i$ are i.i.d. integer-valued random variables with $\mathbb{P}(X_i \leq 1) = 1$ and $\mathbb{P}(X_i = 1) > 0$.
  • $T_b=\min\{n\geq 1:\sum_{i=1}^n X_i = b\}$ is the first hitting time of the point $b$ (so in particular $T_b > 0$).
  • $G(z) = \mathbb{E}\left(z^{-X_1}\right) = \sum_{n=-\infty}^1 z^{-n} \mathbb{P}(X_1=n)$
  • $F_b(z) = \mathbb{E}\left(z^{T_b}\right) = \sum_{n=0}^\infty z^n \mathbb{P}(T_b=n)$
  • equation (9): $F_b(z) = F_1(z)^b$ for $b \geq 1$
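To make the setup concrete, here is a quick Monte Carlo sanity check of equation (9). The step law $\mathbb{P}(X=1)=0.5$, $\mathbb{P}(X=0)=0.2$, $\mathbb{P}(X=-1)=0.3$ is my own choice, not from the book; its positive drift keeps the hitting times finite almost surely.

```python
import random

# Hypothetical step law (my assumption, not from the book):
# P(X=1)=0.5, P(X=0)=0.2, P(X=-1)=0.3, so P(X <= 1) = 1 and P(X = 1) > 0.
steps, probs = [1, 0, -1], [0.5, 0.2, 0.3]

def hitting_time(b, rng):
    """First n >= 1 with X_1 + ... + X_n = b."""
    s, n = 0, 0
    while s != b:
        s += rng.choices(steps, probs)[0]
        n += 1
    return n

def F_hat(b, z, trials=50_000, seed=0):
    """Monte Carlo estimate of F_b(z) = E(z^{T_b})."""
    rng = random.Random(seed)
    return sum(z ** hitting_time(b, rng) for _ in range(trials)) / trials

z = 0.5
f1 = F_hat(1, z)
for b in (2, 3):
    # Equation (9) predicts F_b(z) = F_1(z)^b; the two columns should roughly agree.
    print(b, F_hat(b, z, seed=b), f1 ** b)
```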

I don't see how to justify $$ \mathbb{E}(\mathbb{E}(z^{T_1}|X_1)) = \mathbb{E}(z^{1+T_{1-X_1}}) = z \mathbb{E}\left(F_{1-X_1}(z)\right) = z \mathbb{E}\left(F_1(z)^{1-X_1}\right) = z F_1(z)G(F_1(z)) $$

In J. G. Wendel, "Left-continuous random walk and the Lagrange expansion" (1975), he argues in essence that $$ \begin{align*} \mathbb{E}(\mathbb{E}(z^{T_1}|X_1)) & = \sum_{n=-1}^\infty \mathbb{E}(z^{T_1}|X_1=-n)\ \mathbb{P}(X_1=-n) \\ & = z \mathbb{P}(X_1=1) + \sum_{n=0}^\infty \mathbb{E}(z^{T_1}|X_1=-n)\ \mathbb{P}(X_1=-n) & T_1=1 \Leftrightarrow X_1=1 \\ & = z \mathbb{P}(X_1=1) + \sum_{n=0}^\infty \mathbb{E}(z^{1+T_{1+n}})\ \mathbb{P}(X_1=-n) & \textrm{homogeneity} \\ & = z \mathbb{P}(X_1=1) + z \sum_{n=0}^\infty F_{1+n}(z)\ \mathbb{P}(X_1=-n) & \textrm{definition of }F \\ & = z \mathbb{P}(X_1=1) + z \sum_{n=0}^\infty F_1(z)^{1+n}\ \mathbb{P}(X_1=-n) & \textrm{equation (9)} \\ & = z F_1(z) \sum_{n=-1}^\infty F_1(z)^n\ \mathbb{P}(X_1=-n) \\ & = z F_1(z) G(F_1(z)) & \textrm{definition of }G \end{align*} $$

In the second line, $n=-1$ is treated specially because the temporal and spatial homogeneity assumptions don't apply to it. Specifically, the further time required to hit 1 is 0; applying the homogeneity assumptions would imply $T_0=0$, which isn't allowed, since hitting times are positive by definition.

In the fifth line, where equation (9) is applied, $n=-1$ must be treated specially as well, since (9) is stated only for $b \geq 1$.

So although $\mathbb{E}(\mathbb{E}(z^{T_1}|X_1)) = z F_1(z)G(F_1(z))$, it seems that Grimmett's intermediate steps are nonsensical. Or can you see his logic?


Incidentally, $$ \mathbb{E}\left(F_1(z)^{1-X_1}\right) = \sum_{n=0}^\infty F_1(z)^n\ \mathbb{P}(1-X_1=n) = \sum_{n=-1}^\infty F_1(z)^{1+n}\ \mathbb{P}(X_1=-n) = F_1(z)G(F_1(z)) $$ so the subsequent application of Lagrange's inversion formula on p. 166 is still done correctly.
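Numerically, the resulting fixed-point relation $F_1(z) = z\,\mathbb{E}(F_1(z)^{1-X_1}) = z F_1(z) G(F_1(z))$ is easy to check. A minimal sketch, again assuming a hypothetical step law of my own choosing (not from the book): iterate the map $f \mapsto z\,\mathbb{E}(f^{1-X_1})$ from $0$, which converges monotonically to the smallest non-negative root, then confirm $z F_1(z) G(F_1(z)) = F_1(z)$.

```python
# Hypothetical step law (my assumption, not from the book): P(X_1 = k) = p[k].
p = {1: 0.5, 0: 0.2, -1: 0.3}

def G(w):
    """G(w) = E(w^{-X_1}) = sum_k w^{-k} P(X_1 = k)."""
    return sum(w ** (-k) * pk for k, pk in p.items())

def F1(z, iters=200):
    """Smallest fixed point of f = z * E(f^{1 - X_1}), i.e. F_1(z) = E(z^{T_1})."""
    f = 0.0
    for _ in range(iters):
        f = z * sum(f ** (1 - k) * pk for k, pk in p.items())
    return f

z = 0.5
f = F1(z)
print(f, z * f * G(f))  # equal up to floating-point error
```

The value agrees with the Monte Carlo estimate of $\mathbb{E}(z^{T_1})$ for the same step law, which is the check I found most reassuring when untangling the $T_0$ convention.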

  • 1
    Thanks for pointing this out, Mike. There is indeed potential for confusion over the definition of $T_0$. We need $T_0=0$ at this stage (but not earlier nor later), and we are adding a note in the next reprint. (2012-07-03)
  • 0
    @Geoffrey: Hello, and welcome to math.SE! I have converted your answer to a comment. Because you do not have 50 reputation points yet, [you are only able to comment on your own questions and answers](http://meta.stackexchange.com/questions/19756/how-do-comments-work/19757#19757), so you were forced to post an answer; however, the "add comment" button will appear for you once you gain 50 points. Here is an [explanation of reputation points](http://meta.stackexchange.com/questions/7237/how-does-reputation-work/7238#7238). (2012-07-03)

1 Answer

1

The key is that one defines $T_0=0$. Thus, $\mathrm E(z^{T_1}\mid X_1=1)=z$ is also $\mathrm E(z^{1+T_0})$ and, for every $k\leqslant0$, $\mathrm E(z^{T_1}\mid X_1=k)=\mathrm E(z^{1+T_{1-k}})$. Likewise, $F_0(z)=\mathrm E(z^{T_0})=1$, hence equation (9) holds for $b=0$ as well, and the computation of $\mathrm E(z^{T_1})$ which you recall becomes direct.

  • 0
    You can't define $T_0=0$. It has a meaning already: the time that the walk returns to the origin. I'll add Grimmett's definition above. (2012-02-29)
  • 0
    Nevertheless this is the (only) way to make true G&S computations. In other words, if one sets $T_0=\inf\{n\geqslant1\mid S_n=0\}$, the line you try to understand is false but if one sets $T_0=\inf\{n\geqslant0\mid S_n=0\}$, that is $T_0=0$, this line holds. (Here, $S_0=0$ and $S_n=X_1+\cdots+X_n$.) (2012-02-29)
  • 0
    By the way, you might wish to indicate where in the book G&S define $T_0$ in relation to equation (10) the way you say they do. (2012-03-02)
  • 0
    You raise a good point. $T_r$ is defined in the middle of p. 164, but $S_n$ is defined on p. 162, where $n=0$ would be an element of the sequence $S$, which sits awkwardly with the definition $S_n=\sum_{i=1}^n X_i$. In support of my interpretation of $T_0$, though, the bottom of p. 162 reads: "the random time $T_0$ until the particle makes its first return to the origin". (2012-03-02)
  • 0
    Then again, the definition of $T_0$ on p. 162 may be overridden by the one on p. 164. Hard to tell. It's all just a convention that needs clarification, and I've brought this post to Grimmett's attention. (2012-03-02)
  • 0
    Quite easy to tell, I would say. The answer is already there, just before your eyes. :-) (2012-03-03)