I got this problem among others as homework, and I don't know how to do it. Does anyone know how to start solving it? Thank you! Any help would be appreciated. Sorry that my question is written in raw LaTeX: for some reason I can't get MathJax to work, so what I see is the LaTeX source. Thanks for understanding.

Let $X_1$ and $X_2$ be independent exponentially distributed random variables with parameters $\lambda_1$ and $\lambda_2$, so that $\Pr\{X_i>t\}=e^{-\lambda_i t}$ for $t\ge 0$. Let $N=1$ if $X_1<X_2$ and $N=2$ if $X_2\le X_1$, $U=\min\{X_1,X_2\}=X_N$, $V=\max\{X_1,X_2\}$, and $W=V-U=|X_1-X_2|$. Show:

(a). $\Pr\{N=1\}=\frac{\lambda_1}{\lambda_1+\lambda_2}$ and $\Pr\{N=2\}=\frac{\lambda_2}{\lambda_1+\lambda_2}$

(b). $\Pr\{U>t\}=e^{-(\lambda_1+\lambda_2)t}$ for $t\ge 0$

(c). $N$ and $U$ are independent random variables

(d). $\Pr\{W>t\mid N=1\}=e^{-\lambda_2 t}$ and $\Pr\{W>t\mid N=2\}=e^{-\lambda_1 t}$ for $t\ge 0$

(e). $U$ and $W=V-U$ are independent random variables.

2 Answers

For a), use the law of total probability: $ {\rm P}(X_1 < X_2 ) = \int_0^\infty {{\rm P}(X_1 < X_2 |X_2 = t)f_{X_2 } (t)\,{\rm d}t} , $ where $f_{X_2}$ is the PDF of $X_2$.
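In case it helps, here is one way that integral can evaluate (just a sketch, using $\Pr(X_1 < t) = 1 - e^{-\lambda_1 t}$):

$$ {\rm P}(X_1 < X_2) = \int_0^\infty (1 - e^{-\lambda_1 t})\,\lambda_2 e^{-\lambda_2 t}\,{\rm d}t = 1 - \frac{\lambda_2}{\lambda_1+\lambda_2} = \frac{\lambda_1}{\lambda_1+\lambda_2}, $$

which is exactly ${\rm P}(N=1)$; the value of ${\rm P}(N=2)$ follows by taking complements, since ties have probability zero.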

For b), notice that $\min \{ X_1 ,X_2 \} > t$ if and only if $X_1 > t$ and $X_2 > t$ (and use the fact that $X_1$ and $X_2$ are independent).
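Spelled out, this observation gives the one-line sketch

$$ {\rm P}(U > t) = {\rm P}(X_1 > t,\ X_2 > t) = {\rm P}(X_1 > t)\,{\rm P}(X_2 > t) = e^{-\lambda_1 t}\,e^{-\lambda_2 t} = e^{-(\lambda_1+\lambda_2)t}. $$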

For c), calculate ${\rm P}(\min \{X_1,X_2 \}>t , X_1 > X_2 )$ using the law of total probability, conditioning on $X_2$. You should easily find that $ {\rm P}(\min \{X_1,X_2 \}>t , X_1 > X_2 ) = {\rm P}(\min \{X_1,X_2 \}>t ){\rm P}(X_1 > X_2 ) = \frac{{\lambda _2 }}{{\lambda _1 + \lambda _2 }}e^{ - (\lambda _1 + \lambda _2 )t}. $
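If it is not clear how that comes out, here is a sketch of the conditioning, for $t \ge 0$:

$$ {\rm P}(\min\{X_1,X_2\} > t,\ X_1 > X_2) = \int_t^\infty {\rm P}(X_1 > s)\,\lambda_2 e^{-\lambda_2 s}\,{\rm d}s = \int_t^\infty \lambda_2 e^{-(\lambda_1+\lambda_2)s}\,{\rm d}s = \frac{\lambda_2}{\lambda_1+\lambda_2}\,e^{-(\lambda_1+\lambda_2)t}. $$

Since this factors as ${\rm P}(X_1>X_2)\,{\rm P}(\min\{X_1,X_2\}>t)$ by a) and b), and the same works with the roles of $X_1$ and $X_2$ exchanged, $N$ and $U$ are independent.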

For d), note that $ {\rm P}(|X_1 - X_2 | > t|N = 1) = \frac{{{\rm P}(X_2 - X_1 > t,X_1 < X_2 )}}{{{\rm P}(X_1 < X_2 )}} = \frac{{{\rm P}(X_2 > X_1 + t)}}{{{\rm P}(X_1 < X_2 )}}, $ and you should easily show using the law of total probability, conditioning on $X_1$, that $ {P(X_2 > X_1 + t)} = \frac{{\lambda _1 }}{{\lambda _1 + \lambda _2 }}e^{ - \lambda _2 t}. $ Note: The calculation for ${\rm P}(|X_1 - X_2 | > t|N = 2)$ is completely analogous.
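For instance, conditioning on $X_1$ as suggested gives the sketch

$$ {\rm P}(X_2 > X_1 + t) = \int_0^\infty {\rm P}(X_2 > s + t)\,\lambda_1 e^{-\lambda_1 s}\,{\rm d}s = e^{-\lambda_2 t}\int_0^\infty \lambda_1 e^{-(\lambda_1+\lambda_2)s}\,{\rm d}s = \frac{\lambda_1}{\lambda_1+\lambda_2}\,e^{-\lambda_2 t}, $$

and dividing by ${\rm P}(X_1 < X_2) = \frac{\lambda_1}{\lambda_1+\lambda_2}$ from a) leaves $e^{-\lambda_2 t}$.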

NOTE: Since question e) is not so easy, I give more than hints. However, try solving a significant part of it by yourself.

For e), it is straightforward to show, using the fact that $N$ and $U$ are independent, that $ {\rm P}(W > t \mid U=u) = {\rm P}(W > t \mid N=1, U=u)\,{\rm P}(N=1) + {\rm P}(W > t \mid N=2, U=u)\,{\rm P}(N=2). $ For this purpose, you may replace $U=u$ by $U \in [u,u+{\rm d}u]$, where ${\rm d}u \to 0$, in order to condition on events with positive probability.

Now, given $U=u$ and $N=1$, we have $X_1 = u$ and, by the standard memorylessness property of the exponential distribution, $X_2 - X_1$ is exponential$(\lambda_2)$. Analogously, given $U=u$ and $N=2$, we have $X_2 = u$ and $X_1 - X_2$ is exponential$(\lambda_1)$. From this you should find that $ {\rm P}(W > t \mid U=u) = \frac{\lambda_1}{\lambda_1 + \lambda_2}e^{-\lambda_2 t} + \frac{\lambda_2}{\lambda_1 + \lambda_2}e^{-\lambda_1 t}. $

Now we are done, since $ {\rm P}(W > t) = {\rm P}(W > t, N = 1) + {\rm P}(W > t, N = 2) $ gives, by virtue of a) and d), $ {\rm P}(W > t) = \frac{\lambda_1}{\lambda_1 + \lambda_2}e^{-\lambda_2 t} + \frac{\lambda_2}{\lambda_1 + \lambda_2}e^{-\lambda_1 t}. $
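The memorylessness step can be spelled out as follows (a sketch, for $u, t \ge 0$):

$$ {\rm P}(X_2 - u > t \mid X_2 > u) = \frac{{\rm P}(X_2 > u + t)}{{\rm P}(X_2 > u)} = \frac{e^{-\lambda_2 (u+t)}}{e^{-\lambda_2 u}} = e^{-\lambda_2 t}, $$

so given $N=1$ and $U = X_1 = u$, the overshoot $W = X_2 - u$ is exponential$(\lambda_2)$ whatever the value of $u$, and symmetrically for $N=2$. In particular ${\rm P}(W > t \mid U = u)$ does not depend on $u$, which is exactly why $U$ and $W$ come out independent.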


This is to show that one can solve (e) and similar questions in a fully automated way.

One is asked to show that $U$ and $W$ are independent; let us be more ambitious and try to compute the distribution of $(U,W)$. If this distribution is a product, we are done.

Question: How to compute the distribution of any random variable $Z$?
Answer: By writing $E(\varphi(Z))$ as the integral of $\varphi$ with respect to a measure $\mu$, for every (bounded measurable) function $\varphi$. Then $\mu$ is the distribution of $Z$.

And this is actually often quite easy to do...

Let us see what happens for $Z=(U,W)$. The first step is to replace $Z$ by a function of $(X_1,X_2)$, in the case at hand, $ \varphi(Z)=\varphi(\min\{X_1,X_2\},\max\{X_1,X_2\}-\min\{X_1,X_2\}). $ Now the RHS is a (quite ugly) function of $(X_1,X_2)$ but this does not matter. The only important thing is that the distribution of $(X_1,X_2)$ has density $f_1(x_1)f_2(x_2)$, hence, as for any function of $(X_1,X_2)$, by definition of the distribution of $(X_1,X_2)$, one has $ E(\varphi(Z))=\int\varphi(\min\{x_1,x_2\},\max\{x_1,x_2\}-\min\{x_1,x_2\})f_1(x_1)f_2(x_2)\,\mathrm{d}x_1\mathrm{d}x_2. $

Up to this point, everything is general. Now one begins to use the max/min structure. This forces us to decompose the integral into two parts, one for the domain where $x_1\le x_2$ and the other for the domain $x_1>x_2$. This decomposition yields $E(\varphi(Z))$ as $(*)+(**)$ with $ (*)=\int\varphi(x_1,x_2-x_1)f_1(x_1)f_2(x_2)\mathbf{1}_{x_1\le x_2}\,\mathrm{d}x_1\mathrm{d}x_2 $ and $ (**)=\int\varphi(x_2,x_1-x_2)f_1(x_1)f_2(x_2)\mathbf{1}_{x_1>x_2}\,\mathrm{d}x_1\mathrm{d}x_2. $

Recall that our goal is to write $E(\varphi(Z))$ as $ (o)=\int\varphi(u,w)\,\mathrm{d}\mu(u,w), $ for a given measure $\mu$. Let us rewrite $(*)$ and $(**)$ with this goal in mind. The changes of variables $[u=x_1,w=x_2-x_1]$ in $(*)$ and $[u=x_2,w=x_1-x_2]$ in $(**)$ lead to $ (*)=\int\varphi(u,w)f_1(u)f_2(u+w)\mathbf{1}_{w>0}\,\mathrm{d}u\,\mathrm{d}w $ and to $ (**)=\int\varphi(u,w)f_1(u+w)f_2(u)\mathbf{1}_{w>0}\,\mathrm{d}u\,\mathrm{d}w. $

Comparing $(o)$ with $(*)+(**)$, one sees that the only way to make them equal for every $\varphi$ is that $\mathrm{d}\mu(u,w)=g(u,w)\,\mathrm{d}u\,\mathrm{d}w$ with $ g(u,w)=[f_1(u)f_2(u+w)+f_1(u+w)f_2(u)]\mathbf{1}_{w>0}. $ We are done, and two things are to be noted: first, all these steps are fully automatic, and second, the formula for $g$ is valid for any $(U,W)$ based on independent $X_1$ and $X_2$ with densities $f_1$ and $f_2$.

In the case at hand, $f_i(x)=\lambda_i\mathrm{e}^{-\lambda_i x}$ for $x>0$, hence, for every $u>0$ and $w>0$, $ g(u,w)=\lambda_1\lambda_2\mathrm{e}^{-(\lambda_1+\lambda_2)u}\left[\mathrm{e}^{-\lambda_1 w}+\mathrm{e}^{-\lambda_2 w}\right]. $ The function $g(u,w)$ is a product $g_1(u)g_2(w)$, hence $U$ and $W$ are independent.

And naturally, this proves simultaneously that the functions $g_1$ and $g_2$ are the densities of the distributions of $U$ and $W$, up to multiplicative positive constants.
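For a quick check of this last point, one can integrate out each variable (a sketch):

$$ \int_0^\infty g(u,w)\,\mathrm{d}w = (\lambda_1+\lambda_2)\,\mathrm{e}^{-(\lambda_1+\lambda_2)u}, \qquad \int_0^\infty g(u,w)\,\mathrm{d}u = \frac{\lambda_1\lambda_2}{\lambda_1+\lambda_2}\left[\mathrm{e}^{-\lambda_1 w}+\mathrm{e}^{-\lambda_2 w}\right]. $$

The first is exactly the density of $U$ from b), and integrating the second over $w>t$ gives $\frac{\lambda_2}{\lambda_1+\lambda_2}\mathrm{e}^{-\lambda_1 t}+\frac{\lambda_1}{\lambda_1+\lambda_2}\mathrm{e}^{-\lambda_2 t}$, the tail of $W$ found in the other answer.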