
Let $T$ and $V$ be independent random variables that are exponentially distributed with rates $\lambda$ and $\mu$. Consider their maximum,

$W = \max(T,V)$

From the answer to a previous post, I know that:

$ \mathbb{E}[W] = \frac{1}{\mu} + \frac{1}{\lambda} - \frac{1}{\lambda + \mu}$
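For concreteness, this identity can be checked with a short Monte Carlo sketch (my own addition; the rates $\lambda = 2$, $\mu = 3$ are arbitrary example values):

```python
import random

random.seed(0)

lam, mu = 2.0, 3.0  # assumed example rates for T and V
n = 200_000

# Simulate W = max(T, V) with T ~ Exp(lam), V ~ Exp(mu) independent
total = 0.0
for _ in range(n):
    t = random.expovariate(lam)
    v = random.expovariate(mu)
    total += max(t, v)

estimate = total / n
formula = 1 / mu + 1 / lam - 1 / (lam + mu)
print(estimate, formula)  # the two values should agree to about two decimals
```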

I am wondering if someone can explain this result intuitively, referring to the memoryless property of the exponential distribution.

Alternatively, can anyone point out the flaw in the following chain of reasoning, which seems correct but leads me to a wrong result?

$\begin{align} \mathbb{E}[W] &= \mathbb{E}[\max(T,V)|V \leq T]\mathbb{P}[V \leq T] &+& \mathbb{E}[\max(T,V)|T \leq V]\mathbb{P}[T \leq V] \\ &= \mathbb{E}[\max(T,V)|V \leq T]\frac{\mu}{\lambda+\mu} &+& \mathbb{E}[\max(T,V)| T \leq V]\frac{\lambda}{\lambda+\mu} \end{align}$

Note that random variable,

$\max(T,V) \ \big| \ V \leq T$

is identical to the random variable,

$\max(T,V) \ \big| \ V = \min(T,V)$

since if $V$ is less than $T$ then it must be the minimum of the two.

Having said that, I also believe that

$\max(T,V) \ \big| \ V \leq T$

should have the same distribution as the random variable,

$T + \min(T,V)$

which is apparently wrong.

Here the reasoning is that we have to wait $\min(T,V)$ units of time for the minimum of two events to occur, and then another $T$ units of time for $T$ to occur (given that $V$ was the minimum of both events and that the waiting process restarts itself after $\min(T,V)$ units of time due to the memoryless property of the exponential distribution).

  • @leonbloy Thanks for the suggestion, I re-wrote the problem and tried to make things clearer. Hope this helps! – 2012-12-16

2 Answers


Having said that, I also believe that $\max(T,V) \ \big| \ V \leq T$ should have the same distribution as the random variable, $T + \min(T,V)$ which is apparently wrong.

Of course it's wrong: if you know that $\ V \leq T$, then you know that the maximum is $T$, hence $\max(T,V) \ \big| \ V \leq T$ has the same distribution as $T$.

Edited:


"Of course", the above it's totally wrong. Sorry. Knowing $\ V \leq T$ not only informs us that the maximum is $T$, but also tell us something about the value of the maximum (and it should push it expected value up ).


I think you are confusing two kinds of knowledge (conditioning): which variable is the minimum, and what the value of the minimum is. If you know that the minimum is the variable $V$, and that its value is $v$ (don't confuse the random variables with their values), then you know that $T\ge v$, and in that case the distribution of the conditioned variable shifts by $v$.

  • Of course not! It's actually $T|T\ge V$, which is not the same; I'll fix this. – 2012-12-18

You were right up to: $ \begin{align} \mathbb{E}[W] &= \mathbb{E}[\max(T,V)|V \leq T]\mathbb{P}[V \leq T] &+& \mathbb{E}[\max(T,V)|T \leq V]\mathbb{P}[T \leq V] \\ &= \mathbb{E}[\max(T,V)|V \leq T]\frac{\mu}{\lambda+\mu} &+& \mathbb{E}[\max(T,V)| T \leq V]\frac{\lambda}{\lambda+\mu} \end{align} $ From here on it can be solved as: $ \begin{align} \mathbb{E}[W] &= \mathbb{E}[T|\text{An arrival from }\min(T,V)\text{ has occurred}]\frac{\mu}{\lambda+\mu} + \mathbb{E}[V|\text{An arrival from }\min(T,V) \text{ has occurred}]\frac{\lambda}{\lambda+\mu} \\ &= (\mathbb{E}[T]+\mathbb{E}[\min(T,V)])\frac{\mu}{\lambda+\mu} + (\mathbb{E}[V]+\mathbb{E}[\min(T,V)])\frac{\lambda}{\lambda+\mu} \\ &=(\frac{1}{\lambda}+\frac{1}{\lambda+\mu})\frac{\mu}{\lambda+\mu} + (\frac{1}{\mu}+\frac{1}{\lambda+\mu})\frac{\lambda}{\lambda+\mu} \end{align} $

Note: The informal English used in the above equations comes from the context of Poisson processes, where we can think of $T$ and $V$ as the first arrival times of independent Poisson processes with rates $\lambda$ and $\mu$, respectively. If we merge the two processes, the resulting process is also a Poisson process, with rate $\lambda+\mu$, and $\min(T,V)$ is the first arrival time of the merged process.

Also note that in the second-to-last step above, we used the memoryless property.

Now, rearranging the above expression, we get: $ \mathbb{E}[W] = \frac{1}{\mu} + \frac{1}{\lambda} - \frac{1}{\lambda + \mu} $
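The key conditional mean used in the derivation, $\mathbb{E}[\max(T,V)\mid V \leq T] = \mathbb{E}[\min(T,V)] + 1/\lambda$, can also be checked by simulation (my own addition; $\lambda = 2$, $\mu = 3$ are assumed example values):

```python
import random

random.seed(2)

lam, mu = 2.0, 3.0  # assumed example rates
n = 400_000

cond_sum, cond_count = 0.0, 0
for _ in range(n):
    t = random.expovariate(lam)
    v = random.expovariate(mu)
    if v <= t:                  # condition on V being the minimum
        cond_sum += max(t, v)   # the maximum is then T
        cond_count += 1

cond_mean = cond_sum / cond_count
target = 1 / (lam + mu) + 1 / lam  # E[min] plus the memoryless residual 1/lam
print(cond_mean, target)
```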