I'm trying to model the time between successive events in a sequence of events.
Let $T_i$ ($i=1,2,\ldots$) be the time between event $i$ and event $i+1$. Assume that the $T_i$ are independent and identically distributed with expected value $D$.
Let $\tau_i=\sum_{k=1}^{i-1} T_k$ be the time of occurrence of event $i$, with $\tau_1$ defined as $0$.
I'm interested in the expected average number of events per period (equation 1): $E\left(\lim_{T \to \infty} \frac{1}{T}\sum_{i=1}^\infty I(\tau_i \leq T)\right)$ where $I(\tau_i \leq T)$ is an indicator variable which is $1$ when $\tau_i \leq T$ and $0$ otherwise.
It seems to me that this should simply equal the inverse of the expected duration between events, i.e., $1/E(T_i) = 1/D$.
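As a quick numerical sanity check of this claim (a sketch only, not part of the argument: the exponential distribution, the mean $D=2$, and the horizon $10^6$ are all arbitrary choices for illustration):

```python
import random

random.seed(0)

D = 2.0          # hypothetical mean inter-event time E(T_i)
T_horizon = 1e6  # large horizon standing in for T -> infinity

# Draw exponential inter-event times with mean D (one convenient
# i.i.d. choice) and count events occurring up to T_horizon.
t = 0.0
count = 0
while True:
    t += random.expovariate(1.0 / D)  # one draw of T_i
    if t > T_horizon:
        break
    count += 1

rate = count / T_horizon
print(rate, 1.0 / D)  # the empirical rate should be close to 1/D
```

With this horizon the empirical rate agrees with $1/D = 0.5$ to a few decimal places, which is at least consistent with the conjectured relation.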
One approach I took to proving this is to consider the average number of events per period only at the time points at which some event occurs. Taking $T=\tau_i$ in the expression inside the limit in equation (1), we would have (equation 2): $\frac{i}{\tau_i}$
Taking the limit as $i \to \infty$, the strong law of large numbers makes this converge to $1/D$ on almost all sample paths. Would this be enough to show that the expectation in equation (1) is also $1/D$?
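Spelling out the SLLN step with the definitions above (so that $\tau_i$ is a sum of $i-1$ i.i.d. terms):

```latex
\frac{\tau_i}{i-1} = \frac{1}{i-1}\sum_{k=1}^{i-1} T_k
  \xrightarrow{\text{a.s.}} D
\quad\Longrightarrow\quad
\frac{i}{\tau_i} = \frac{i}{i-1}\cdot\frac{i-1}{\tau_i}
  \xrightarrow{\text{a.s.}} 1 \cdot \frac{1}{D} = \frac{1}{D}.
```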
Also, how do I prove that each sample path gives the same average number of events per period regardless of whether the limit in the definition is taken over all points in time or only over the points in time at which an event occurs? If the $T_i$ had an upper bound this would be trivial, but are there more general conditions under which this step is justified (for example, would finite variance of $T_i$ suffice)?
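For reference, the sandwich bound that I believe is relevant here, writing $N(T)=\sum_{i=1}^\infty I(\tau_i \le T)$ for the event count and taking $T$ large enough that $\tau_{N(T)} > 0$:

```latex
\tau_{N(T)} \le T < \tau_{N(T)+1}
\quad\Longrightarrow\quad
\frac{N(T)}{\tau_{N(T)+1}} < \frac{N(T)}{T} \le \frac{N(T)}{\tau_{N(T)}}.
```

If both outer ratios converge to $1/D$ almost surely (which again seems to need the SLLN together with $N(T) \to \infty$), the middle one is squeezed to the same limit; what I am unsure about is exactly which conditions on the $T_i$ make this rigorous.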
If my approach does not work, is there some other way of proving the relation in the question title?