
Suppose that events occur according to a Poisson process with rate $\lambda$, so that for every $t > 0$, the number of occurrences $N(t)$ in the time interval $[0,t]$ has a Poisson distribution with parameter $\lambda t$. Let $T_n$ be the waiting time to the occurrence of the $n$th event. Show that $T_n$ has a gamma distribution with parameters $(n, \lambda)$.

$F(t)=P(T_n \le t)=1-P(T_n > t)=1-P(N(t)\le n-1)=1-\sum_{i=0}^{n-1}\frac{e^{-\lambda t}(\lambda t)^i}{i!}$. I have to prove that the derivative of this expression is equal to $\frac{\lambda e^{-\lambda t}(\lambda t)^{n-1}}{\Gamma(n)}$.

How to do it?

  • Note the inequality: $F(t)$, the probability that the waiting time $T_n$ is at most $t$, equals the probability that at time $t$ the number of occurrences is $n$ or larger (so that the $n$-th event occurred no later than $t$). Hence $F(t) = P(N(t) \ge n) = 1 - P(N(t) \le n-1)$, not $1 - P(N(t) \ge n-1)$. @amWhy, 2018-08-31

2 Answers


Well,
$ \frac{\mathrm{d}}{\mathrm{d} t} \frac{\mathrm{e}^{-\lambda t}(\lambda t)^i}{i!} = -\lambda \frac{\mathrm{e}^{-\lambda t}(\lambda t)^i}{i!} + \lambda i \frac{\mathrm{e}^{-\lambda t}(\lambda t)^{i-1}}{i!} = \lambda\left(\frac{\mathrm{e}^{-\lambda t}(\lambda t)^{i-1}}{(i-1)!} - \frac{\mathrm{e}^{-\lambda t}(\lambda t)^i}{i!} \right), $
where for $i=0$ the first term inside the parentheses is absent. Therefore
$ \frac{\mathrm{d}}{\mathrm{d} t} \sum_{i=0}^{n-1} \frac{\mathrm{e}^{-\lambda t}(\lambda t)^i}{i!} = \lambda \sum_{i=0}^{n-1} \left(\frac{\mathrm{e}^{-\lambda t}(\lambda t)^{i-1}}{(i-1)!} - \frac{\mathrm{e}^{-\lambda t}(\lambda t)^i}{i!} \right) = -\lambda \frac{\mathrm{e}^{-\lambda t}(\lambda t)^{n-1}}{(n-1)!}, $
because $\sum_{i=0}^{n-1} \big(f(i-1)-f(i)\big) = f(-1) - f(n-1)$ for any $f$, and here $f(-1)=0$. Hence
$ F'(t) = -\frac{\mathrm{d}}{\mathrm{d} t} \sum_{i=0}^{n-1} \frac{\mathrm{e}^{-\lambda t}(\lambda t)^i}{i!} = \frac{\lambda \mathrm{e}^{-\lambda t}(\lambda t)^{n-1}}{\Gamma(n)}. $
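Not part of the original answer, but the telescoping can be sanity-checked symbolically; a minimal sketch with SymPy, using a concrete order $n=5$ (the choice of $n$ is arbitrary):

```python
# Verify that d/dt [1 - sum_{i=0}^{n-1} e^{-lam t}(lam t)^i / i!]
# equals the Gamma(n, lam) density lam e^{-lam t}(lam t)^{n-1} / Gamma(n).
import sympy as sp

t, lam = sp.symbols("t lam", positive=True)
n = 5  # any concrete order works the same way

F = 1 - sum(sp.exp(-lam * t) * (lam * t) ** i / sp.factorial(i) for i in range(n))
density = sp.diff(F, t)

gamma_pdf = lam * sp.exp(-lam * t) * (lam * t) ** (n - 1) / sp.gamma(n)
print(sp.expand(density - gamma_pdf))  # → 0
```

Expanding the difference makes every telescoping term cancel exactly, leaving zero.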


A different approach that also uses a Poisson distribution with parameter $\lambda t$ would be:

$P(t < T_n < t+dt) = \underbrace{ P(N(t) = n-1)}_{\substack{\text{the probability for }\\\text{$(n-1)$ arrivals at time $t$}}} \qquad \qquad \times \qquad \underbrace{\vphantom{P(N(t) = n-1)}\lambda dt}_{\substack{\text{the probability for }\\\text{arrival between time $t$ and $t+dt$}}} $

leading to

$P(t < T_n < t+dt)= \text{Poisson}(n-1,\lambda t) \times \lambda dt = \lambda\frac{(\lambda t)^{n-1} e^{-\lambda t}}{(n-1)!} dt = \frac{\lambda e^{-\lambda t}(\lambda t)^{n-1} }{\Gamma(n)} dt$
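This is not part of the original answer, but the claimed distribution of $T_n$ is easy to check by simulation (assuming NumPy and SciPy are available): the inter-arrival times of a Poisson process are i.i.d. $\text{Exponential}(\lambda)$, so $T_n$ is a sum of $n$ of them, and its empirical distribution should match the $\text{Gamma}(n,\lambda)$ CDF.

```python
# Monte Carlo check that T_n ~ Gamma(n, lam); lam, n, reps are arbitrary choices.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
lam, n, reps = 2.0, 4, 200_000

# T_n = sum of n iid Exponential(lam) inter-arrival times.
T_n = rng.exponential(scale=1.0 / lam, size=(reps, n)).sum(axis=1)

# Kolmogorov-Smirnov distance to Gamma(shape=n, rate=lam).
d, p = stats.kstest(T_n, stats.gamma(a=n, scale=1.0 / lam).cdf)
print(d)  # small, on the order of 1/sqrt(reps)
```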


The connection with your approach is as follows:

You could view the derivative as $\frac{d}{dt} P(N(t) < n) = \sum_{k=0}^{n-1} \frac{d}{dt} P(N(t)=k)$

and

$\underbrace{{\frac{d}{dt} P(N(t)=k)}}_{\text{change of k}} = \underbrace{\lambda P(N(t)=k-1)\vphantom{\frac{d}{dt} P(N(t)=k)}}_{\text{gain from k-1 to k}} -\underbrace{\lambda P(N(t)=k)\vphantom{\frac{d}{dt} P(N(t)=k)}}_{\text{loss from k to k+1}}$

and all those terms cancel (much like the answer of Sasha):

$\frac{d}{dt} P(N(t) < n) = \sum_{k=0}^{n-1} \Big(\lambda P(N(t)=k-1) - \lambda P(N(t)=k)\Big) = -\lambda P(N(t)=n-1)$

So the derivative of $P(N(t) < n)$, the rate at which the process surpasses the 'level' $n-1$, is determined by how much probability mass currently sits at the 'level' $n-1$ and how fast it rises to the 'level' $n$. What happens at lower 'levels' does not matter for the change of $P(N(t) < n)$.
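The cancellation above can also be checked numerically; a minimal sketch (the values of $\lambda$, $n$, and $t$ are arbitrary) comparing a central difference of $P(N(t)<n)$ with $-\lambda P(N(t)=n-1)$:

```python
# Numerical check: d/dt P(N(t) < n) = -lam * P(N(t) = n-1).
import math

lam, n, t, h = 1.5, 4, 2.0, 1e-6

def pois(k: int, s: float) -> float:
    """P(N(s) = k) for a Poisson process with rate lam."""
    return math.exp(-lam * s) * (lam * s) ** k / math.factorial(k)

def below_n(s: float) -> float:
    """P(N(s) < n) = sum of the first n Poisson probabilities."""
    return sum(pois(k, s) for k in range(n))

numeric = (below_n(t + h) - below_n(t - h)) / (2 * h)  # central difference
closed = -lam * pois(n - 1, t)
print(abs(numeric - closed) < 1e-6)  # → True
```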

