
I'm working on the following problem from a homework assignment, and I'm stuck. It asks to find the distribution function of a random variable $X$ on a discrete probability space that takes values in $[A,B]$ and for which $Var(X) = \left(\frac{B-A}{2}\right)^{2}.$

From this equality I derived that $E(X) = \frac{A+B}{2}$ and $E(X^{2}) = \frac{A^{2}+B^{2}}{2}$, but I can't see why this gives a unique distribution (as the statement of the problem suggests).

I also found one distribution that works: $p(A)=\frac{1}{2}$, $p(B)=\frac{1}{2}$, and $p(x) = 0$ for $x \in (A,B)$. But I don't see why it is the only one. Can anyone shed some light, please?
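A quick numerical sanity check (a sketch, not part of the problem itself; the endpoints $A=2$, $B=10$ are arbitrary choices) confirms that this two-point candidate attains the required moments:

```python
# Sanity check: the two-point distribution p(A) = p(B) = 1/2
# should have E(X) = (A+B)/2 and Var(X) = ((B-A)/2)^2.
A, B = 2.0, 10.0  # arbitrary example endpoints

values = [A, B]
probs = [0.5, 0.5]

mean = sum(p * x for p, x in zip(probs, values))
second_moment = sum(p * x ** 2 for p, x in zip(probs, values))
var = second_moment - mean ** 2

assert mean == (A + B) / 2                      # E(X) = 6.0
assert second_moment == (A ** 2 + B ** 2) / 2   # E(X^2) = 52.0
assert var == ((B - A) / 2) ** 2                # Var(X) = 16.0
```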

Thanks a lot!

  • 0
    If the question asks to find the distribution function of A random variable that blablabla, it's probably because there is not only one. To be quite honest I would be surprised if there was only one.2012-10-07
  • 0
    yes, I figured, but how could you characterize all of them knowing only the expectation and the variance? (I suppose this is what the question asks)2012-10-07
  • 0
    Maybe we can do it in this nice case where the variance attains its maximal value (we know that $Var(X) \leq \left(\frac{B-A}{2} \right)^{2}$ in general)...2012-10-07
  • 0
    I don't think you have any idea how ugly random variables can be. Knowing expectation and variance is very little. Now that I think about it maybe there's something to do... but I'm not very optimistic.2012-10-07
  • 1
    @PatrickDaSilva In principle you are right that random variables may be *ugly*, as you call them, but here, there is indeed a unique maximizing distribution, which is the discrete one you found in your answer.2012-10-07
  • 0
    @Dquik You might want to explain how you *got that this equality gives the expected value* E(X)=(A+B)/2, for example.2012-10-07
  • 0
    @did: that followed from the proof of the general $Var(X) \leq \left(\frac{B-A}{2}\right)^{2}$...2012-10-07
  • 0
    @Dquik I gave you no hint (so far), rather I asked for an explanation about the way you prove that E(X) for the optimal X is (A+B)/2 (and to tell you the truth, I find surprising that one is able to do THAT while being unable to prove the whole question--but surely I am missing something).2012-10-07
  • 0
    @Dquik You erased your previous comment, to which mine replied. Unfortunately, your new comment explains nothing at all--unless you now make precise how you prove that the optimal $X$ are such that $E(X)=(A+B)/2$ AND how you prove that Var$(X)\leqslant(B-A)^2/4$ for every $X$. You said you knew how to prove these facts, just show your proof.2012-10-07
  • 0
    @did: so we have $\left(\frac{B-A}{2}\right)^{2} = Var(X) = E(X- E(X))^{2} = E\left( X - \frac{A+B}{2} - ( E(X) - \frac{A+B}{2}) \right)^{2} = E( X - \frac{A+B}{2} )^{2} - (E ( X - \frac{A+B}{2}))^{2} \leq E ( X - \frac{A+B}{2} )^{2} \leq \left(\frac{B-A}{2}\right)^{2}$2012-10-07
  • 0
    @did : I admit I assumed in my answer that $E(X) = (A+B)/2$, otherwise I'm missing something. It wasn't clear if OP's question actually assumed it or had shown it...2012-10-07
  • 0
    @Dquik Right. To complete your proof, see my answer. (May I suggest you append the proof in your last comment to your main post.)2012-10-07
  • 0
    @PatrickDaSilva Yes. The last comment of the OP contains a proof that E(X)=(A+B)/2 hence one may assume this at the onset.2012-10-07
  • 0
    To both of you ; how does the last inequality hold in Dquik's comment? I don't get it.2012-10-07
  • 0
    @PatrickDaSilva See my answer. Or note that for every $A\leqslant x\leqslant B$, one has $-\frac12(B-A)\leqslant x-\frac12(A+B)\leqslant\frac12(B-A)$.2012-10-07
  • 0
    @did : Sure. Stupid question... :P2012-10-07
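The bound $Var(X) \leqslant \left(\frac{B-A}{2}\right)^{2}$ invoked in the comments above can also be spot-checked empirically; a sketch (the endpoints, support size, and sample count below are arbitrary choices):

```python
import random

# Empirical spot-check of the bound Var(X) <= ((B-A)/2)^2: sample
# random discrete distributions supported in [A, B] and verify that
# none of them exceeds the bound.
random.seed(0)
A, B = 0.0, 1.0  # arbitrary endpoints
bound = ((B - A) / 2) ** 2

for _ in range(1000):
    xs = [random.uniform(A, B) for _ in range(5)]  # support points
    ws = [random.random() for _ in range(5)]       # unnormalized weights
    total = sum(ws)
    ps = [w / total for w in ws]                   # probabilities
    mean = sum(p * x for p, x in zip(ps, xs))
    var = sum(p * (x - mean) ** 2 for p, x in zip(ps, xs))
    assert var <= bound + 1e-12  # bound holds (up to float tolerance)
```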

2 Answers


Let $m=\frac12(A+B)$ and $h=\frac12(B-A)$. The OP indicates in a comment how to prove that any random variable $X$ with values in $[A,B]$ and such that $\mathrm{Var}(X)=h^2$ is such that $\mathbb E(X)=m$ and $\mathbb E((X-m)^2)=h^2$.

Starting from this point, note that $|X(\omega)-m|\leqslant h$ for every $\omega$ since $A\leqslant X(\omega)\leqslant B$, hence $(X-m)^2\leqslant h^2$ everywhere. Together with the equality $\mathbb E((X-m)^2)=h^2$, this proves that $(X-m)^2=h^2$ almost surely, that is, $X\in\{A,B\}$ almost surely. Now, use once again the fact that $\mathbb E(X)=m$ to deduce that $\mathbb P(X=A)=\mathbb P(X=B)=\frac12$.

Edit: Let us recall why $Y\geqslant0$ almost everywhere and $\mathbb E(Y)=0$ imply that $Y=0$ almost everywhere.

Fix $\varepsilon\gt0$, then $Y\geqslant0$ almost everywhere hence $Y\geqslant\varepsilon\mathbf 1_{Y\geqslant\varepsilon}$ almost everywhere. This implies that $0=\mathbb E(Y)\geqslant\varepsilon\,\mathbb P(Y\geqslant\varepsilon)$, that is, $\mathbb P(Y\geqslant\varepsilon)=0$. Now, $[Y\ne0]=[Y\gt0]$ is the countable union over every positive integer $n$ of the events $[Y\geqslant1/n]$ hence $\mathbb P(Y\ne0)=0$. QED.
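The argument above can be illustrated numerically in the discrete case: any distribution placing positive mass strictly inside $(A,B)$ has $\mathbb E((X-m)^2)$ strictly below $h^2$, and since $\mathrm{Var}(X)\leqslant\mathbb E((X-m)^2)$, its variance cannot attain $h^2$. A small sketch (the interior support point $0.3$ and the weights are arbitrary choices):

```python
# Discrete illustration: if X puts positive mass on an interior point
# of [A, B], then E((X - m)^2) < h^2 strictly, so Var(X) < h^2.
A, B = -1.0, 1.0
m, h = (A + B) / 2, (B - A) / 2  # m = 0.0, h = 1.0

# a hypothetical distribution with some mass strictly inside (A, B)
xs = [A, 0.3, B]
ps = [0.45, 0.10, 0.45]

second_central = sum(p * (x - m) ** 2 for p, x in zip(ps, xs))
assert second_central < h ** 2  # 0.909 < 1.0
```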

  • 0
    Am I wrong saying this is a "non-scaled" version of my proof? It feels like it.2012-10-07
  • 0
    @PatrickDaSilva The idea is similar but the realizations of this idea seem different, even once one omits this scaling/unscaling thing, which is minor. Note for example that the method in my post nowhere assumes that the distribution has a density or is discrete or anything similar. Note also that you assume that the distribution has a density, only to reach the conclusion that it has not... :-) However, as I said, yes, in the end the idea is similar.2012-10-07
  • 0
    Good, we agree =) I felt a little sketchy I admit.2012-10-07
  • 1
    If you ask me, I see no problem with *sketchy* solutions (as long as one recognizes them as such), nor with so-called *stupid* questions (which in my book simply do not exist...).2012-10-07
  • 0
    I meant "trivial question" then. ;)2012-10-07

If we simplify the problem and scale things appropriately, you're essentially looking for a distribution function $f(x)$ such that $$ \int_{-1}^{1} f(x) \, dx = 1, \quad \int_{-1}^1 x f(x) \, dx = 0, \quad \int_{-1}^1 x^2 f(x) \, dx = 1. $$

Let us first use only the first and third conditions, i.e. look only at variables $X$ whose $f$ satisfies $$ \int_{-1}^1 f(x) \, dx = 1, \quad \int_{-1}^1 x^2 f(x) \, dx = 1. $$ These imply that $$ \int_{-1}^1 (1 - x^2) f(x) \, dx = 0. $$

Notice that $1-x^2$ is nonnegative on $[-1,1]$ and that $f(x) \, dx$ is a positive measure on $[-1,1]$ with $\int_{-1}^1 f = 1$. Fix $\varepsilon > 0$ and let $A_{\varepsilon} = [-1+\varepsilon, 1 - \varepsilon]$. Suppose, for contradiction, that $\int_{A_{\varepsilon}} f(x) \, dx > 0$. Then $$ 0 = \int_{-1}^1 (1-x^2)f(x) \, dx \ge \int_{A_{\varepsilon}} (1-x^2) f(x) \, dx \ge \int_{A_\varepsilon} (1-(1-\varepsilon)^2) f(x) \, dx > 0, $$ since the last integral is the positive constant $1-(1-\varepsilon)^2$ times the integral of $f$ over $A_{\varepsilon}$. This contradiction shows that $\int_{A_{\varepsilon}} f(x) \, dx = 0$ for every $\varepsilon > 0$.

Since probability measures are continuous from below and $A_{\varepsilon}$ increases to $]-1,1[$ as $\varepsilon \to 0$, we get $p(]-1,1[) = 0$, hence $p( \{-1,1\} ) = 1$ by taking complements. Since the expectation has to be $0$, the mass must split symmetrically, i.e. $p(-1) = p(1) = \frac 12$.
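One can check numerically that $p(-1) = p(1) = \frac12$ satisfies the three rescaled moment conditions stated at the start of this answer:

```python
# Check that p(-1) = p(1) = 1/2 meets the three rescaled moment
# conditions: total mass 1, mean 0, second moment 1.
xs = [-1.0, 1.0]
ps = [0.5, 0.5]

total_mass = sum(ps)
mean = sum(p * x for p, x in zip(ps, xs))
second = sum(p * x ** 2 for p, x in zip(ps, xs))

assert total_mass == 1.0
assert mean == 0.0
assert second == 1.0
```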

Hope that helps,

  • 0
    This answer is probably better off being a comment, but at least it makes me a little bit more optimistic that your random variable might be unique. I could "argue that all the weights have to be on the endpoints" using measure theory a little better but I didn't know if you knew measure theory. It wouldn't be that hard.2012-10-07
  • 0
    I forgot to say that the probability space is discrete, sorry2012-10-07
  • 0
    anyway, please elaborate more, I'm not sure I understood the argument anyway (you can use measure theory)2012-10-07
  • 0
    @Dquik : Probability space is discrete? That changes a lot!2012-10-07
  • 0
    @Dquik : I actually didn't need to assume the space was discrete! There literally is only one variable with such properties. I just needed to go through my assumptions a little bit more in detail. The whole proof is up there. Feel free to ask if you have any questions.2012-10-07
  • 0
    I understand the proof now, but I'm kinda doubtful about how you rescaled it to get the $-1$ and $1$ range... can you elaborate there too? thanks2012-10-07
  • 0
    @Dquik : If you don't trust my scaling, just go through the exact same arguments replacing the bounds $-1$ and $1$ by $A$ and $B$. You'll just have to take care of where $(A+B)/2$ and $(B-A)/2$ show up, but the steps work the same.2012-10-07