
(In the following I'm only talking about discrete spaces, variables, etc.)

The definition of independence is such that when two events are dependent, we don't know exactly what the dependence looks like.

Let me specify this: Suppose we have two dependent random variables $X,Y$ on our space $\Omega$; that means that there exist $A,B\subseteq\mathbb{R}$ such that the events $X^{-1}(A)\subseteq\Omega$ and $Y^{-1}(B)\subseteq\Omega$ are dependent. But how does this translate into what we can say about the distribution of, say, $Y$, if we know the distribution of $X$? What is the connection between their distributions?

I'm asking this since here, right at the beginning, it says that "two random variables are independent if they convey no information about each other and, as a consequence, receiving information about one of the two does not change our assessment of the probability distribution of the other", so if the variables are dependent there would have to be a connection between their distributions.

  • http://en.wikipedia.org/wiki/Bayes'_theorem (2012-12-29)
  • @Learner Yes. (2012-12-29)
  • @QiaochuYuan Well, I thought a little bit about it, but that doesn't yet give me what I want for random variables... I found this (http://en.wikipedia.org/wiki/Posterior_probability#Calculation), which seems to be what you are insinuating, but that is rather complicated... (2012-12-29)
  • "...that means, that for every..." should rather be "...that means that for some..." (2012-12-29)
  • @leonbloy I think "for some" is not a mathematical term. "For every" should be ok, see for instance http://www.statlect.com/inddst1.htm (2012-12-29)
  • But it's false that the "dependence" must hold for every subset pair. The _independence_ property must hold for _every_ subset pair; if it does not hold for _some_ $A,B$, then they are dependent. (2012-12-29)
  • Ah, yes, sorry, I forgot I talked about dependence rather than independence. (2012-12-29)

1 Answer


Conditional probability tells how one variable depends (pointwise) on another. Assuming discrete distributions: $$P(Y=y \mid X=x) = \frac{P(X=x,\, Y=y)}{P(X=x)}$$

If they are independent, then $P(X=x,\, Y=y) = P(X=x)\,P(Y=y)$, so

$$P(Y=y | X=x) = \frac{P(X=x) P(Y=y)}{P(X=x)}=P(Y=y)$$

Conversely, if they are not independent, knowing the value of $X$ changes the distribution of $Y$ (at least for some values of $X$ and $Y$).
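For a concrete illustration (a made-up joint distribution, not part of the original question): let $X$ and $Y$ both take values in $\{0,1\}$ with joint probabilities
$$P(X=0,\, Y=0) = P(X=1,\, Y=1) = 0.4, \qquad P(X=0,\, Y=1) = P(X=1,\, Y=0) = 0.1.$$
Then $P(Y=1) = 0.5$, but
$$P(Y=1 \mid X=1) = \frac{P(X=1,\, Y=1)}{P(X=1)} = \frac{0.4}{0.5} = 0.8,$$
so learning that $X=1$ shifts the distribution of $Y$.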

Regarding the question, "what we can say about the distribution of, say, $Y$, if we know the distribution of $X$":

Nothing. But if we know the value of $X$, and assuming that we know the full joint distribution of $X,Y$, then we can determine the conditional distribution of $Y$, namely $P(Y\mid X)$.
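To spell this out (just the first equation combined with the standard marginalization step): from the joint distribution one recovers the marginal of $X$ as
$$P(X=x) = \sum_{y} P(X=x,\, Y=y),$$
and then, for every $x$ with $P(X=x)>0$,
$$P(Y=y \mid X=x) = \frac{P(X=x,\, Y=y)}{\sum_{y'} P(X=x,\, Y=y')}.$$
So the joint distribution determines both marginals and both conditional distributions.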

  • I pondered a while on your answer but could not quite follow the last sentence. Could you please tell me/sketch a proof of why "if we know the value of $X$, and assuming that we know the full joint distribution of $X,Y$, we can determine the conditional distribution of $Y$, $P(Y\mid X)$" holds? (2012-12-29)
  • @user26698: It's just the first equation (I rewrote it with the letters swapped, so that it fits the statement better). (2012-12-29)