
$X, Y$ are continuous random variables taking values in $[0,1]$. Their joint density is $f(x, y) = x+y$ when $x, y \in [0,1]$, and $f(x,y) = 0$ otherwise.

Is it possible to conclude whether or not $X$ and $Y$ are independent from just this definition, without performing any calculations or mathematical manipulation, i.e. just by inspection of the joint pdf? If so, how?

[Figure: the OP's plot of the joint density $f(x,y) = x + y$ over the unit square.]

3 Answers

Answer (score: 2)

The following lemma can be useful for such problems.

Lemma: A function $f:\mathbb{R}^2\to \mathbb{R}$ of two variables can be expressed as a product $f(x,y)=g(x)h(y)$ if and only if $f(x,y)\,f(x^*,y^*)=f(x,y^*)\,f(x^*,y)$ for all $x, y, x^*, y^* \in \mathbb{R}$.

Taking $x=y=0$ and $x^*=y^*=1$ gives $f(x,y)\,f(x^*,y^*)=f(0,0)\,f(1,1)=0\cdot 2=0$ but $f(x,y^*)\,f(x^*,y)=f(0,1)\,f(1,0)=1\cdot 1=1$, which shows that your $f$ cannot be written as a product.
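The counterexample can also be checked mechanically. A minimal Python sketch (not part of the original answer) evaluating the lemma's criterion at the four points used above:

```python
# Sketch: test the lemma's product criterion
# f(x, y) * f(x*, y*) == f(x, y*) * f(x*, y) at x = y = 0, x* = y* = 1.

def f(x, y):
    """Joint density f(x, y) = x + y on the unit square, 0 elsewhere."""
    if 0 <= x <= 1 and 0 <= y <= 1:
        return x + y
    return 0.0

lhs = f(0, 0) * f(1, 1)   # 0 * 2 = 0
rhs = f(0, 1) * f(1, 0)   # 1 * 1 = 1
print(lhs, rhs)           # the two sides differ, so f is not a product
```

Since the two sides disagree at a single choice of points, the lemma already rules out any factorization $f(x,y)=g(x)h(y)$.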

  • What does the asterisk stand for? (2017-02-28)
  • Nothing. I just want 4 different variable names. You could use $w,x,y,z$ if you prefer. (2017-02-28)
  • And the 0 and 1 values, do they follow any pattern? (2017-02-28)
  • If you want to disprove a general equation, simply pick some values and see whether the left-hand side and the right-hand side are equal. Zero and one were just convenient choices. (2017-02-28)
  • Thanks for this. In your lemma, how do you prove the $\Leftarrow$ direction of the iff? (2017-02-28)
  • @DanielGoldbach Fix $x^*, y^*$ so that $f(x^*, y^*)>0$ and divide by $f(x^*,y^*)$. (2017-02-28)
Answer (score: 1)

For these two random variables to be independent it is necessary that the joint density decompose into the product of the densities of the two random variables. One can see that $x+y$ cannot be expressed in the form $x+y=f(x)g(y)$ for all $x, y \in [0,\,1]$.
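As a numerical sketch of the same point (not from the original answer): the marginals work out to $f_X(x) = x + \tfrac12$ and, by symmetry, $f_Y(y) = y + \tfrac12$, and their product visibly disagrees with the joint density, e.g. at the origin.

```python
import numpy as np

# Approximate the marginal by averaging the joint density over a fine
# grid in y (the interval has length 1, so the mean approximates the
# integral); analytically f_X(x) = x + 1/2.
grid = np.linspace(0.0, 1.0, 100001)

def f(x, y):
    return x + y  # joint density on the unit square

def marginal(x):
    return np.mean(f(x, grid))  # ~ integral of f(x, y) dy over [0, 1]

# At (0, 0) the joint density is 0, but the product of marginals is
# (1/2)(1/2) = 1/4, so the joint density is not the product and X, Y
# are dependent.
print(f(0, 0), marginal(0) * marginal(0))
```

One mismatch point is enough, since independence requires $f(x,y) = f_X(x) f_Y(y)$ everywhere (up to null sets).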

  • One can see it, but that doesn't remove the need for a proof. (2017-02-28)
  • @ClementC. Nothing to prove here: $x+y=f(x)g(y)$ implies, e.g., $0=0+0=f(0)g(0)$, which is possible only if $f(0)=0$ or $g(0)=0$. But $y=0+y=f(0)g(y)$ for all $y$, so $f(0)\neq 0$. Similarly, $x=x+0=f(x)g(0)$ for all $x$, so $g(0)\neq 0$. Contradiction. Moreover, $x$ and $y$ enter symmetrically, so if such $f$ and $g$ exist they must coincide. But on the diagonal $x+x=f(x)f(x)$, so $f(x)=\sqrt{2x}$. Substituting back into the equation gives a contradiction again: there are $x,y\in [0,1]$ such that $x+y\neq \sqrt{2x}\sqrt{2y}$. (2017-02-28)
  • What you have just written *is* a proof. (2017-02-28)
  • @ClementC. This is not a proof, just obvious observations. Did you notice that the author asked how one can see the dependence without any mathematical manipulation, just by looking at the joint density? (2017-02-28)
  • I did. And my point is that, without further elaboration, your answer is only marginally better than "of course they are not independent, it's obvious by looking at the pdf." (2017-02-28)
Answer (score: 0)

After getting some help suggesting the use of contour plots, the correlation between $X$ and $Y$ became clearer. See below the original plot from the OP (left) side by side with a color-coded contour plot (right):

[Figure: the OP's surface plot (left) next to a color-coded contour plot of $f(x,y)=x+y$ (right).]

which, when seen from above, does convey the idea of higher overall density in $Y$ as $X$ increases, although arguably not as obviously as in the case of a bivariate normal with correlation (left):

[Figure: contour plot of a correlated bivariate normal density (left), for comparison.]
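The plots are read qualitatively above; as a numerical cross-check (not part of the original answer), the correlation can be approximated directly with a midpoint-rule Riemann sum. It comes out small and negative, about $-1/11$, so the linear association is weak even though the variables are dependent:

```python
import numpy as np

# Midpoint-rule Riemann sum for moments of f(x, y) = x + y on [0,1]^2.
n = 2000
x = (np.arange(n) + 0.5) / n            # midpoints of a uniform grid
X, Y = np.meshgrid(x, x, indexing="ij")
F = X + Y                               # joint density on the grid
dA = 1.0 / n**2                         # area of each grid cell

EX  = np.sum(X * F) * dA                # E[X]  = 7/12
EXY = np.sum(X * Y * F) * dA            # E[XY] = 1/3
EX2 = np.sum(X**2 * F) * dA             # E[X^2] = 5/12

cov  = EXY - EX * EX                    # E[Y] = E[X] by symmetry
var  = EX2 - EX**2                      # Var(Y) = Var(X) by symmetry
corr = cov / var
print(round(corr, 4))                   # ~ -1/11
```

Exactly, $\operatorname{Cov}(X,Y) = \tfrac13 - \left(\tfrac{7}{12}\right)^2 = -\tfrac{1}{144}$ and $\operatorname{Var}(X) = \tfrac{11}{144}$, giving correlation $-\tfrac{1}{11}$.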