
The convolution formula goes as follows:

If the random variables $X$ and $Y$ are independent and continuous with density functions $f_X$ and $f_Y$, then the density function of $Z=X+Y$ is $$f_Z(z)=\int_{-\infty}^\infty f_X(x)f_Y(z-x)\,\mathrm{d}x\quad \text{for } z\in\mathbb R.$$

However, the theorem does not explicitly state that those two random variables have to be jointly continuous. Why not? My textbook assumes joint continuity to show,

$$f_Z(z)=\int_{-\infty}^\infty f_{X,Y}(x,z-x) \, \mathrm{d}x,$$

which almost immediately gives us the convolution formula for independent random variables that are continuous.
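(As a sanity check, not from my textbook: the convolution formula can be verified numerically. A minimal Python sketch for two independent Uniform(0, 1) variables, whose sum is known to have the triangular density $f_Z(z)=z$ on $[0,1]$ and $2-z$ on $[1,2]$:

```python
# Numerical check of f_Z(z) = ∫ f_X(x) f_Y(z - x) dx for two independent
# Uniform(0, 1) variables; their sum has the triangular density.
def f_uniform(x):
    return 1.0 if 0.0 <= x <= 1.0 else 0.0

def f_Z(z, n=20000):
    # Riemann-sum approximation of the convolution integral over [0, 1]
    h = 1.0 / n
    return sum(f_uniform(i * h) * f_uniform(z - i * h) for i in range(n)) * h

# Compare against the known triangular density at a few points
for z, expected in [(0.5, 0.5), (1.0, 1.0), (1.5, 0.5)]:
    assert abs(f_Z(z) - expected) < 1e-3
```

The choice of uniforms is mine, purely because the answer is easy to verify by hand.)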

So my question is: why is joint continuity not assumed in the theorem?

Later on my textbook gives the following example:

Let $X$ and $Y$ be independent random variables having, respectively, the gamma distribution with parameters $s$ and $\lambda$, and the gamma distribution with parameters $t$ and $\lambda$.

They proceed to apply the convolution formula, yet they nowhere state that those two random variables are jointly continuous...
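(For what it's worth, the claimed result of that example can be checked numerically: the convolution of the two gamma densities should be the Gamma$(s+t,\lambda)$ density. A minimal Python sketch; the parameter values are my own, for illustration:

```python
import math

# If X ~ Gamma(s, λ) and Y ~ Gamma(t, λ) are independent, the convolution
# formula should reproduce the Gamma(s + t, λ) density for Z = X + Y.
def gamma_pdf(x, shape, rate):
    if x <= 0.0:
        return 0.0
    return rate ** shape * x ** (shape - 1) * math.exp(-rate * x) / math.gamma(shape)

def convolved_pdf(z, s, t, rate, n=20000):
    # Midpoint-rule approximation of ∫_0^z f_X(x) f_Y(z - x) dx
    h = z / n
    return sum(gamma_pdf((i + 0.5) * h, s, rate) *
               gamma_pdf(z - (i + 0.5) * h, t, rate) for i in range(n)) * h

s, t, lam = 2.0, 3.0, 1.5
for z in (0.5, 2.0, 4.0):
    assert abs(convolved_pdf(z, s, t, lam) - gamma_pdf(z, s + t, lam)) < 1e-4
```

This agrees with the textbook's example, but of course says nothing yet about *why* the formula is justified without an explicit joint-continuity hypothesis.)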

So is it a mistake, or am I missing something here?

  • 0
    $\displaystyle \int f(x)\,\mathrm{d}x$ looks typographically different from $\displaystyle \int f(x)\,\mathrm{dx},$ and the second one is wrong because the $x$ should be consistently italicized. I edited the question accordingly. Another issue is that \operatorname in many contexts adds a bit of space to its left and right (but not when parentheses follow it, etc.), while the space should be only to its left, and for that reason I used \mathrm instead. (2017-01-25)
  • 0
    Is it the best solution, though? I've tried out several things, and nothing seems to work perfectly. With \mathrm you have to adjust the spacing manually too. (2017-01-25)
  • 0
    The manual adjustment to the spacing in $\displaystyle \int f(x)\,\mathrm{d} x$ is the small space between $f(x)$ and $\mathrm{d}x.$ That space should be there, and I don't know of any standard way of doing this that doesn't just add it manually. The same thing applies to $f(x_i)\,\Delta x$ and to the denominator in $\displaystyle \frac{\partial^2 f}{\partial y\,\partial x}.$ (2017-01-25)

2 Answers


Joint continuity is not included as a hypothesis when independence is assumed. The reason is that if each of two random variables has a continuous distribution and they are independent, then their joint distribution is continuous.

Here, by "continuous", I mean not just that the c.d.f. is continuous, but that there is a probability density function (a somewhat stronger condition). That means for every measurable set $A$ you have $$ \Pr(X\in A) = \int_A f_X(x)\,dx $$ and similarly for $Y$. Independence implies \begin{align} & \Pr(X\in A\ \&\ Y\in B) = \Pr(X\in A)\Pr(Y\in B) = \int_A f_X(x)\,dx \int_B f_Y(y)\,dy \\ = {} & \Big( \text{something not depending on } y \Big)\times \int_B f_Y(y)\,dy = c\int_B f_Y(y)\,dy \\[10pt] = {} & \int_B c f_Y(y)\,dy = \int_B\left( \int_A f_X(x)\,dx \right) f_Y(y)\,dy \\[10pt] = {} & \int_B\left( \int_A f_X(x)\,dx \right) \Big( \text{something not depending on } x \Big) \,dy \\[10pt] = {} & \int_B \left( \int_A f_X(x) f_Y(y)\,dx \right) \,dy \\[10pt] = {} & \iint_{A\times B} f_X(x) f_Y(y) \,d(x,y) \quad \text{by Fubini's theorem or Tonelli's theorem.} \end{align} This works for sets of the form $A\times B$, i.e. $(x,y)$ is in that set if and only if $x\in A$ and $y\in B$. Now there's the problem of more general sets, for example $(x,y)\in C$ where $C$ is a disk in the $xy$-plane. Can one prove that $$ \Pr((X,Y)\in C) = \iint_C f_X(x) f_Y(y)\, d(x,y) \text{ ?} $$ This involves some theory of integration beyond what will fit in the tiny margin of this page (o.k. -- I mean more than I'm going to write here). But once one shows this, one concludes that $(x,y)\mapsto f_X(x) f_Y(y)$ is the joint density.
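The factorization over product sets $A\times B$ above can be sanity-checked numerically. Here is a minimal sketch, assuming independent Exponential(1) variables and the rectangles $A=[0,1]$, $B=[1,2]$ (the distributions and sets are my choices, purely for illustration):

```python
import math

# Check that the iterated (Fubini) integral of the product density over
# A × B equals Pr(X ∈ A) · Pr(Y ∈ B) for independent Exponential(1) variables,
# with A = [0, 1] and B = [1, 2].
def f_exp(x):
    return math.exp(-x) if x >= 0.0 else 0.0

def integrate(f, a, b, n=2000):
    # Midpoint-rule quadrature
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

inner = integrate(f_exp, 1.0, 2.0)  # ∫_B f_Y(y) dy: "something not depending on x"
lhs = integrate(lambda x: f_exp(x) * inner, 0.0, 1.0)  # iterated integral over A × B

# Exact probabilities from the exponential c.d.f. 1 - e^{-x}
rhs = (1 - math.exp(-1)) * (math.exp(-1) - math.exp(-2))
assert abs(lhs - rhs) < 1e-6
```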

  • 0
    I post the beginning of my proof as an answer, because something goes wrong in the comments. (2017-01-25)

Here is my proof, then:

$$\begin{aligned} F_{X,Y}(x,y)\stackrel{1}{=}{}&F_X(x)F_Y(y)\stackrel{2}{=}\int_{-\infty}^x f_X(u)\,\mathrm {d}u \int_{-\infty}^y f_Y(v)\,\mathrm {d}v\\ \stackrel{3}{=}{}&\int_{-\infty}^x \int_{-\infty}^yf_X(u)f_Y(v)\,\mathrm {d}v\,\mathrm {d}u\stackrel{4}{=}\int_{-\infty}^x \int_{-\infty}^yf_{X,Y}(u,v)\,\mathrm {d}v\,\mathrm {d}u. \end{aligned}$$

I still needed to get $F_{X,Y}(x,y)=\int_{-\infty}^x \int_{-\infty}^yf(u,v)\,\mathrm {d}v\,\mathrm {d}u$, for some function $f:\mathbb R^2\to[0,\infty)$.

$^1$Follows from independence.

$^2$ Follows from continuity.

$^3$ Follows from Michael Hardy's argument (I think, at least that's how I see it), where $A=\{X\leq x\}$, $B=\{Y\leq y\}$.

$^4$ Follows from the definition of joint continuity. If a function $f:\mathbb R^2\to[0,\infty)$ satisfies $F_{X,Y}(x,y)=\int_{-\infty}^x \int_{-\infty}^yf(u,v)\,\mathrm {d}v\,\mathrm {d}u$, then it is called the joint density function, $f_{X,Y}$, and thereby the random variables are jointly continuous. In this case, $f_{X,Y}(x,y)=f_X(x)f_Y(y)$, as follows from step 3.

I think we also don't need to worry about the domain, because both $f_X$ and $f_Y$ are defined on all of $\mathbb R$ by definition.
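For what it's worth, steps 1 through 4 can be sanity-checked numerically. A minimal Python sketch, using independent Exponential(1) variables (my choice, for illustration only): the empirical joint c.d.f. at a point $(x,y)$ should match $F_X(x)F_Y(y)$, which is the closed form of the iterated integral of $f_X(u)f_Y(v)$.

```python
import math
import random

# Empirical check that F_{X,Y}(x, y) = F_X(x) F_Y(y) for two independent
# Exponential(1) variables, which is the closed form of the iterated
# integral of the product density f_X(u) f_Y(v).
random.seed(0)
x, y = 1.0, 0.5
N = 200_000
samples = [(random.expovariate(1.0), random.expovariate(1.0)) for _ in range(N)]
empirical = sum(1 for u, v in samples if u <= x and v <= y) / N

# Closed form of the iterated integral, using the exponential c.d.f. 1 - e^{-x}
product_cdf = (1 - math.exp(-x)) * (1 - math.exp(-y))
assert abs(empirical - product_cdf) < 0.01
```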

  • 0
    Your item $(4)$ does not follow from joint continuity, and it wouldn't make sense to include anything that does, because joint continuity was not assumed. The question was _why_ joint continuity was not assumed. Rather, $(4)$ follows from the fact that the integral before $(4)$ is equal to the value of the joint c.d.f., as you've just shown, provided that the joint c.d.f. is enough to determine the probabilities of things like disks (mentioned in my answer). (2017-01-25)
  • 0
    My wording wasn't very good; I meant that it follows from the definition of joint continuity that $X$ and $Y$ are jointly continuous, because we can simply set $f(x,y)=f_X(x)f_Y(y)$. The problem lay in the fact that I didn't realize that $f_X(x)f_Y(y)$ already is the function $f(x,y)$ required for joint continuity. Anyhow, thanks a lot for your help! I get it now. (2017-01-25)