
I was trying to solve the following exercise:

Suppose that the sequence of independent events $\{A_i\}$ satisfies $$\phi(n) = \sum_{i=1}^{n} P(A_i) \rightarrow \infty \text{ as } n \rightarrow \infty$$ and let $\tau_k :=\min\{n | \sum_{i=1}^{n} 1(A_i) = k\}$. By applying Doob's stopping time theorem to an appropriate martingale, prove the more quantitative result that $$E[\phi(\tau_k)] = k \text{ for all } k \geq 1$$

Here, $1(\cdot)$ is the indicator function. By "Doob's stopping theorem", I think the book means the following result (which it names simply the "stopping time theorem"):

If $(M_n)$ is a martingale with respect to $(\mathcal{F}_n)$, then the stopped process $(M_{\tau \wedge n})$ is also a martingale with respect to $(\mathcal{F}_n)$.
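In particular, since the stopped process $(M_{\tau \wedge n})$ is itself a martingale, its expectation is constant in $n$, so $$E[M_{\tau \wedge n}] = E[M_0] \text{ for all } n$$ which, if I understand correctly, is the form one is meant to apply here.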

First of all, let me say that I think my proof is wrong: I don't use the stopping time theorem or the fact that $\phi(n)$ diverges as $n \rightarrow \infty$, and it just looks too easy to be true. In any case, I'd welcome hints and constructive criticism.

My attempt at a proof:

Since $P(A) = E[1(A)]$, we can write $$\phi(\tau_k) = \sum_{i=1}^{\tau_k} P(A_i) = \sum_{i=1}^{\tau_k} E[1(A_i)]$$ Now, $\tau_k$ is the smallest $n$ such that exactly $k$ of the events $A_1, A_2, \ldots, A_n$ occur. So $$\sum_{i=1}^{\tau_k} E[1(A_i)] = \sum_{j=1}^{k} E[1(A_{i_j})]$$ for some indices $i_j$. Since, by definition of $\tau_k$, the events $A_{i_j}$ occur for $j = 1, \ldots, k$, we have $$\phi(\tau_k) = \underbrace{1 + 1 + \ldots + 1}_{k} = k$$ and taking the expectation of both sides gives $$E[\phi(\tau_k)] = E[k] = k$$

  • $\phi (\tau _k )$ is a (random) sum of probabilities, not $1$'s. (2011-03-15)
  • @Shai Covo: but isn't $E[1(A_{i_j})] = 1$ for all $j=1,\ldots,k$? (2011-03-15)
  • Note that if $P(A_i) = p$, then $\phi (\tau _k ) = p\tau _k$, which is not integer-valued in general. (2011-03-15)
  • Define a martingale $(X_n: n \geq 1)$ by $X_n = \sum\nolimits_{i = 1}^n {[{\mathbf 1}(A_i ) - P(A_i )]}$. Consider the optional stopping theorem, as formulated on Wikipedia. You want $\tau_k$ to be a stopping time with respect to $X_1,X_2,\ldots$; that is, you want to show that the event $\{ \tau_k = n \}$ is completely determined by the total information known up to time $n$, $\{X_1,\ldots,X_n \}$. Next, you want ${\rm E}(\tau_k ) < \infty $. (The condition ${\rm E}|X_{i + 1} - X_i | \le c$ is clearly satisfied, with $c=1$.) However,... (2011-03-16)
  • However, while the condition $\phi(n) \to \infty$, that is $\sum\nolimits_{i = 1}^\infty {P(A_i )} = \infty $, implies that with probability $1$ infinitely many of the events $A_i$ occur, and hence that $\tau_k$ is almost surely finite, ${\rm E}(\tau_k )$ might be infinite, and hence the standard optional stopping theorem cannot be applied here. For example, let $P(A_i)=1/(i+1)$, and note that ${\rm E}(\tau_1 ) = \sum\nolimits_{i = 1}^\infty {1/(i + 1)} = \infty$. (See the simulation sketch after these comments.) (2011-03-16)
  • @Shai Covo: +1 for your "However" comment, which I had missed. (2011-03-16)
  • @Henry: Thanks. It was natural to expect a simple application of the optional stopping theorem... (2011-03-16)
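
The example in the last comment can also be checked numerically. Below is a minimal Monte Carlo sketch (assuming Python; the helper `sample_phi_tau` and the truncation `cap` are illustrative choices, not from the thread) with $P(A_i) = 1/(i+1)$:

    import random

    def sample_phi_tau(k, cap=10**6):
        """Simulate one value of phi(tau_k) when P(A_i) = 1/(i+1).

        tau_k is a.s. finite but heavy-tailed, so the loop is truncated
        at `cap` (an illustrative choice) to keep the runtime bounded.
        """
        count = 0   # how many of A_1, ..., A_i have occurred so far
        phi = 0.0   # running value of phi(i) = P(A_1) + ... + P(A_i)
        for i in range(1, cap + 1):
            p = 1.0 / (i + 1)          # P(A_i)
            phi += p                   # phi(i) includes the i-th term
            if random.random() < p:    # did A_i occur?
                count += 1
                if count == k:         # this i is tau_k, so stop
                    break
        return phi

    random.seed(0)
    trials = 20_000
    for k in (1, 2, 3):
        est = sum(sample_phi_tau(k) for _ in range(trials)) / trials
        print(f"k={k}: estimated E[phi(tau_k)] = {est:.3f}")

The estimates come out close to $k$, consistent with $E[\phi(\tau_k)] = k$, even though for this choice ${\rm E}(\tau_k) = \infty$, so running averages of $\tau_k$ itself would not settle down.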

1 Answer

  1. Doob's stopping theorem is probably intended to mean the optional stopping theorem, named in honour of Joseph Leo Doob.
  2. You implicitly use $\phi(n) \rightarrow \infty$ since if it had some finite upper bound $L$ then for $k>L$ you would not be able to have $\phi(\tau_k) = k$.
  3. When you say $\sum_{i=1}^{\tau_k} E[1(A_i)] = \sum_{j=1}^{k} E[1(A_{i_j})]$, it is unclear whether you are using $A_i$ to mean unknown future events at the start or known past events at the stopping time. The optional stopping theorem, applied to a suitable martingale, is what lets you pass between the two viewpoints: the expected value of such a martingale at a stopping time equals its initial value. (As Shai Covo's comments point out, ${\rm E}(\tau_k)$ may be infinite here, so what is needed is the stopped form plus a limiting argument rather than the standard statement.) I would expect you to find something like $E[X_{\tau \wedge n}]=E[X_0]$ or $E[X_{\tau}]=E[X_0]$ somewhere in your book; a sketch of how this applies is given below.
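
To spell that out, here is one way to run the argument, using Shai Covo's martingale from the comments (a sketch; it may not be the book's intended route). Set $X_0 := 0$ and $$X_n = \sum_{i=1}^{n} [\mathbf{1}(A_i) - P(A_i)]$$ This is a martingale with respect to $\mathcal{F}_n = \sigma(A_1, \ldots, A_n)$, since the increments $\mathbf{1}(A_{n+1}) - P(A_{n+1})$ are independent of $\mathcal{F}_n$ and have mean zero. The stopped-process theorem gives $E[X_{\tau_k \wedge n}] = E[X_0] = 0$ for every $n$, that is, $$E\left[\sum_{i=1}^{\tau_k \wedge n} \mathbf{1}(A_i)\right] = E[\phi(\tau_k \wedge n)]$$ Now let $n \to \infty$. The integrand on the left equals $\left(\sum_{i=1}^{n} \mathbf{1}(A_i)\right) \wedge k$ and increases almost surely to $k$, because $\tau_k < \infty$ a.s.: the $A_i$ are independent and $\sum_i P(A_i) = \infty$, so infinitely many of them occur by the second Borel-Cantelli lemma. The integrand on the right, $\phi(\tau_k \wedge n)$, increases to $\phi(\tau_k)$. Applying monotone convergence to each side yields $$E[\phi(\tau_k)] = k$$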
  • Regarding your third comment, I don't quite see how we use any of that in the exercise. Could you elaborate? (2011-03-15)
  • If you have not used it, then all you have proved is that at the stopping time $\tau_k$, $k$ events have occurred (so each with retrospective probability $1$) and $\tau_k - k$ have not (each with retrospective probability $0$), so the expectation of the sum of their retrospective probabilities is $k$. That should not be a surprise. The optional stopping theorem lets you take this back to the start and say that the expectation of the sum of their prospective probabilities is the same $k$, even though you do not know which events will occur or even when the stopping time will be. (2011-03-15)
  • Could you show how you would do such a thing? (2011-03-16)
  • How I would do what? (2011-03-16)
  • I guess I just don't understand stopping times well enough to see how the theory naturally fits into solving this problem. I'll keep trying. Thanks for your time. (2011-03-16)