
I have two questions. (1) How do I determine the distribution of the lowest and the highest order statistic when a sample of random size $N$ is taken from the continuous Uniform$(0, \theta)$ distribution and

\begin{equation} \mathrm{P}(N = n) = \frac{1}{n!\,(\mathrm{e} - 1)} \quad \text{for } n = 1, 2, 3, \ldots \end{equation}

(2) In general, without any particular distribution being specified, the CDF of the highest order statistic is $F_{X_{(n)}}(a) = [F_{X}(a)]^n$ and the CDF of the lowest order statistic is $F_{X_{(1)}}(a) = 1 - [1 - F_X(a)]^n$. I understand this derivation, but what do these formulas say about the distribution?
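To see concretely what the fixed-$n$ formulas say, here is a quick simulation sketch (not part of the original question; Uniform$(0,1)$ with $n = 5$ and the test point $a = 0.7$ are my own choices for illustration). The empirical CDFs of the sample maximum and minimum should match $[F(a)]^n = a^n$ and $1 - [1 - F(a)]^n = 1 - (1-a)^n$:

```python
import random

# Empirical check of the order-statistic CDFs for a fixed sample size n,
# assuming Uniform(0, 1) so that F(a) = a.
random.seed(0)
n, trials, a = 5, 200_000, 0.7

count_max = count_min = 0
for _ in range(trials):
    xs = [random.random() for _ in range(n)]
    if max(xs) <= a:
        count_max += 1   # event {max <= a} = all n draws <= a
    if min(xs) <= a:
        count_min += 1   # event {min <= a} = not all n draws > a

print(count_max / trials, a**n)            # both near 0.7^5 ≈ 0.168
print(count_min / trials, 1 - (1 - a)**n)  # both near 1 - 0.3^5 ≈ 0.998
```

The intuition the formulas encode: the maximum is below $a$ only when *every* draw is, so its CDF is pushed toward $1$ slowly (mass concentrates near the top of the support), while the minimum is below $a$ unless *every* draw exceeds $a$, so its mass concentrates near the bottom.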

2 Answers

Didier gave much more than the OP requested. My solution below addresses the original question (and is thus somewhat simpler, in my opinion).

If $Z$ denotes the maximum, then
$$ {\rm P}(Z \le z) = \sum_{n = 1}^\infty {\rm P}(Z \le z \mid N = n)\,{\rm P}(N = n) = \sum_{n = 1}^\infty F_{X_{(n)}}(z)\,\frac{1}{n!({\rm e} - 1)}. $$
Since $F_{X_{(n)}}(z) = [F_X(z)]^n = (z/\theta)^n$ for $0 \le z \le \theta$, we get
$$ {\rm P}(Z \le z) = \frac{1}{{\rm e} - 1}\sum_{n = 1}^\infty \frac{(z/\theta)^n}{n!} = \frac{{\rm e}^{z/\theta} - 1}{{\rm e} - 1}. $$
Hence $Z$ has density
$$ f_Z(z) = \frac{1}{({\rm e} - 1)\theta}\,{\rm e}^{z/\theta}, \qquad 0 < z < \theta. $$

Similarly, if $Y$ denotes the minimum, then
$$ {\rm P}(Y \le y) = \sum_{n = 1}^\infty {\rm P}(Y \le y \mid N = n)\,{\rm P}(N = n) = \sum_{n = 1}^\infty F_{X_{(1)}}(y)\,\frac{1}{n!({\rm e} - 1)}. $$
Since $F_{X_{(1)}}(y) = 1 - [1 - F_X(y)]^n = 1 - (1 - y/\theta)^n$ for $0 \le y \le \theta$, and since the probabilities ${\rm P}(N = n) = \frac{1}{n!({\rm e}-1)}$ sum to $1$, we get
$$ {\rm P}(Y \le y) = \sum_{n = 1}^\infty \frac{1 - (1 - y/\theta)^n}{n!({\rm e} - 1)} = 1 - \sum_{n = 1}^\infty \frac{(1 - y/\theta)^n}{n!({\rm e} - 1)} = 1 - \frac{{\rm e}^{1 - y/\theta} - 1}{{\rm e} - 1}. $$
Hence $Y$ has density
$$ f_Y(y) = \frac{1}{({\rm e} - 1)\theta}\,{\rm e}^{1 - y/\theta}, \qquad 0 < y < \theta. $$
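The conditioning argument above can be checked numerically. A useful observation (my own, not from the answer): ${\rm P}(N = n) = \frac{1}{n!({\rm e}-1)}$ is exactly the distribution of a Poisson$(1)$ variable conditioned on being $\ge 1$, which makes $N$ easy to simulate. The sketch below (with $\theta = 2$ and $z = 1.3$ chosen arbitrarily) compares the empirical ${\rm P}(Z \le z)$ with $\frac{{\rm e}^{z/\theta} - 1}{{\rm e} - 1}$:

```python
import math
import random

# Monte Carlo check of the maximum's CDF derived above (a sketch, not part
# of the original answer).  P(N = n) = 1/(n!(e-1)) is a Poisson(1) variable
# conditioned on N >= 1; the sample is Uniform(0, theta).
random.seed(1)
theta, trials, z = 2.0, 100_000, 1.3

def draw_size():
    """Draw N with P(N = n) = 1/(n!(e-1)): Poisson(1), rejecting 0."""
    while True:
        # Knuth's product-of-uniforms Poisson(1) sampler.
        k, p = 0, 1.0
        while p > math.exp(-1.0):
            k += 1
            p *= random.random()
        if k - 1 >= 1:
            return k - 1

hits = sum(
    max(random.uniform(0, theta) for _ in range(draw_size())) <= z
    for _ in range(trials)
)

exact = (math.exp(z / theta) - 1) / (math.e - 1)
print(hits / trials, exact)  # the two values should agree to ~2 decimals
```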

  • @Didier: No offense meant in any way. I could not explain everything in this answer that made me understand it in the comment section, so I pointed out the one that was helpful. I guess I didn't explain myself clearly enough, but anyway, thanks. (2011-02-09)

The idea is to consider simultaneously the minimum $Y$ and the maximum $Z$ of the sample. For $y\le z$, the event $[y\le Y,Z\le z]$ is equivalent to the whole sample lying between $y$ and $z$; hence, for a sample of fixed size $n$, $P(y\le Y,Z\le z)$ would be $P(y\le X\le z)^n$. Here one considers a sample of random size $N$, hence
$$ P(y\le Y,Z\le z)=\sum_{n\ge1}P(N=n)\,P(y\le X\le z)^n=c\left(\mathrm{e}^{P(y\le X\le z)}-1\right), \qquad c=\frac{1}{\mathrm{e}-1}. $$
To get the joint distribution of $(Y,Z)$, one differentiates $P(y\le Y,Z\le z)$ twice, with respect to $y$ and $z$, yielding
$$ P(Y\in\mathrm{d}y,Z\in\mathrm{d}z)=c\,\mathrm{e}^{P(y\le X\le z)}\,P(X\in\mathrm{d}y)\,P(X\in\mathrm{d}z). $$
Getting the distribution of $Y$ alone is even simpler: one differentiates $P(y\le Y,Z\le z)$ once with respect to $y$ and lets $z\to+\infty$, hence
$$ P(Y\in\mathrm{d}y)=c\,\mathrm{e}^{P(X\ge y)}\,P(X\in\mathrm{d}y). $$
Likewise, to get the distribution of $Z$ alone, one differentiates $P(y\le Y,Z\le z)$ once with respect to $z$ and lets $y\to-\infty$, hence
$$ P(Z\in\mathrm{d}z)=c\,\mathrm{e}^{P(X\le z)}\,P(X\in\mathrm{d}z). $$
In the special case where the sample is uniform on $(0,\theta)$, for $0\le y\le z\le\theta$, the density of the distribution of $(Y,Z)$ at $(y,z)$ is
$$ f_{Y,Z}(y,z)=\frac{c}{\theta^{2}}\,\mathrm{e}^{(z-y)/\theta}. $$
Finally, the densities of the distributions of $Y$ and $Z$ on $(0,\theta)$ are
$$ f_Y(y)=\frac{c\,\mathrm{e}}{\theta}\,\mathrm{e}^{-y/\theta}, \qquad f_Z(z)=\frac{c}{\theta}\,\mathrm{e}^{z/\theta}. $$
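The joint formula can also be sanity-checked by simulation. The sketch below (my own, not from the answer; $\theta = 2$, $y = 0.4$, $z = 1.6$ chosen arbitrarily) draws $N$ as a Poisson$(1)$ variable conditioned on $N \ge 1$, which has exactly $P(N=n)=\frac{1}{n!(\mathrm{e}-1)}$, and compares the empirical $P(y\le Y, Z\le z)$ with $c\left(\mathrm{e}^{P(y\le X\le z)}-1\right)$:

```python
import math
import random

# Check the joint formula P(y <= Y, Z <= z) = c(e^{P(y <= X <= z)} - 1),
# c = 1/(e - 1), for a Uniform(0, theta) sample of random size N.
random.seed(2)
theta, trials = 2.0, 100_000
y, z = 0.4, 1.6

def draw_size():
    """Draw N with P(N = n) = 1/(n!(e-1)): Poisson(1), rejecting 0."""
    while True:
        k, p = 0, 1.0
        while p > math.exp(-1.0):
            k += 1
            p *= random.random()
        if k - 1 >= 1:
            return k - 1

hits = 0
for _ in range(trials):
    xs = [random.uniform(0, theta) for _ in range(draw_size())]
    if y <= min(xs) and max(xs) <= z:  # whole sample inside [y, z]
        hits += 1

c = 1.0 / (math.e - 1.0)
# For Uniform(0, theta), P(y <= X <= z) = (z - y)/theta.
exact = c * (math.exp((z - y) / theta) - 1)
print(hits / trials, exact)  # should agree to ~2 decimals
```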

  • @Shai: you are right, typo corrected, thanks. (2011-02-08)