
Let $\mu_i,\nu_i$ be probability measures on a finite space $\Omega_i$, $i=1,2,\dots,n$. Define $\mu=\prod\limits_{i=1}^{n}\mu_i$ and $\nu=\prod\limits_{i=1}^{n}\nu_i$ on $\Omega=\prod\limits_{i=1}^{n}\Omega_i$, and show that $$\|\mu-\nu\| \le \sum\limits_{i=1}^{n}\|\mu_i-\nu_i\|,$$ where $\|\mu-\nu\|$ denotes the total variation distance between $\mu$ and $\nu$.

I know how to do this using coupling, is there a way to do it without coupling?

I tried to write $$\|\mu-\nu\|={1 \over 2}\sum\limits_{x=(x_1,x_2,\dots,x_n) \in \Omega}\left|\prod\limits_{i=1}^{n}\mu_i(x_i)-\prod\limits_{i=1}^{n}\nu_i(x_i)\right|$$

and to use the fact that $\prod\limits_{i=1}^{n}\mu_i \le \sum\limits_{i=1}^{n}\mu_i$ and $\prod\limits_{i=1}^{n}\nu_i \le \sum\limits_{i=1}^{n}\nu_i$, but I didn't succeed.
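Before looking for a proof, one can at least confirm the inequality numerically on random product measures. The sketch below is purely illustrative; all helper names (`random_prob`, `tv_dist`, `prod_mass`) are made up for the experiment and are not part of any standard API.

```python
import itertools
import random

# Numerical sanity check (not a proof) of the claimed bound
# ||mu - nu|| <= sum_i ||mu_i - nu_i|| for random product measures.

def random_prob(k):
    """A random probability vector of length k."""
    w = [random.random() for _ in range(k)]
    s = sum(w)
    return [v / s for v in w]

def tv_dist(p, q):
    """Total variation distance: half the l^1 distance between mass functions."""
    return 0.5 * sum(abs(a - b) for a, b in zip(p, q))

def prod_mass(factors, idx):
    """Mass of the product measure at the point x = (x_1, ..., x_n)."""
    p = 1.0
    for f, j in zip(factors, idx):
        p *= f[j]
    return p

random.seed(0)
for _ in range(200):
    sizes = [random.randint(2, 4) for _ in range(random.randint(1, 4))]
    mus = [random_prob(k) for k in sizes]
    nus = [random_prob(k) for k in sizes]
    # ||mu - nu||, computed by enumerating the product space.
    lhs = 0.5 * sum(
        abs(prod_mass(mus, idx) - prod_mass(nus, idx))
        for idx in itertools.product(*(range(k) for k in sizes))
    )
    rhs = sum(tv_dist(m, v) for m, v in zip(mus, nus))
    assert lhs <= rhs + 1e-12
```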

2 Answers


This is a direct consequence of the fact that, for every nonnegative $a_i$ and $b_i$, $$|(a_1\cdots a_n)-(b_1\cdots b_n)|\leqslant\sum\limits_{i=1}^n|a_i-b_i|\,(a_1\cdots a_{i-1})(b_{i+1}\cdots b_n). $$ Hence, $$ 2\|\mu-\nu\|=\sum\limits_x\left|\mu_1(x_1)\cdots\mu_n(x_n)-\nu_1(x_1)\cdots\nu_n(x_n)\right|\leqslant\sum\limits_{i=1}^n\Delta_i, $$ with $$ \Delta_i=\sum\limits_{x_i}|\mu_i(x_i)-\nu_i(x_i)|\,\sum\limits_{\widehat x_i}\mu_1(x_1)\cdots\mu_{i-1}(x_{i-1})\nu_{i+1}(x_{i+1})\cdots\nu_n(x_n), $$ where $\widehat x_i=(x_1,\ldots,x_{i-1},x_{i+1},\ldots,x_n)$. Each sum over $\widehat x_i$ equals $1$, being a product of total masses of probability measures, hence $$ \Delta_i=\sum\limits_{x_i}|\mu_i(x_i)-\nu_i(x_i)|=2\|\mu_i-\nu_i\|, $$ and you are done.

Edit The first inequality follows from the triangle inequality applied to the numbers $(c_i)_{0\leqslant i\leqslant n}$ defined by $c_i=(a_1\cdots a_{i})(b_{i+1}\cdots b_n)$, so that $c_0=b_1\cdots b_n$ and $c_n=a_1\cdots a_n$. Indeed, for every $1\leqslant i\leqslant n$, $$ c_{i}-c_{i-1}=(a_{i}-b_{i})(a_1\cdots a_{i-1})(b_{i+1}\cdots b_n), $$ and summing these increments telescopes to $c_n-c_0=(a_1\cdots a_n)-(b_1\cdots b_n)$.
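As a quick sanity check of the elementary product inequality (not part of the proof), one can test it on random nonnegative inputs; `product` and `rhs` below are ad hoc names introduced for this sketch.

```python
import random

# Check: |a1...an - b1...bn| <= sum_i |a_i - b_i| * (a1...a_{i-1}) * (b_{i+1}...b_n)
# for nonnegative a_i, b_i, as in the telescoping argument above.

def product(xs):
    """Product of a list of numbers; the empty product is 1."""
    p = 1.0
    for x in xs:
        p *= x
    return p

def rhs(a, b):
    """Right-hand side of the telescoping bound."""
    n = len(a)
    return sum(
        abs(a[i] - b[i]) * product(a[:i]) * product(b[i + 1:])
        for i in range(n)
    )

random.seed(0)
for _ in range(1000):
    n = random.randint(1, 6)
    a = [random.random() for _ in range(n)]
    b = [random.random() for _ in range(n)]
    assert abs(product(a) - product(b)) <= rhs(a, b) + 1e-12
```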

  • How can I prove the first inequality? By expanding a few cases, say $n=2$, $n=3$, I see that it holds, but I can't prove it formally. (2011-10-13)
  • See edit. (2011-10-13)
  • I see, clever. Thanks for the clarification. (2011-10-14)

First, notice that the difference of two probability measures is a signed measure of total variation norm at most $2$.

Hence your problem reduces to showing that $\|\mu\|\leq\sum\limits_{i=1}^{n}\|\mu_i\|$ for signed measures $\mu_i$ on finite spaces $\Omega_i$ with $\|\mu_i\|\leq 2$.

I will show you how to do the case $n=2$.

Then we have to show that $$ \sum\limits_{x\in\Omega_1,y\in\Omega_2} \lvert\mu_1(x)\mu_2(y)\rvert\leq \sum\limits_{x\in\Omega_1} \lvert\mu_1(x)\rvert +\sum\limits_{y\in\Omega_2} \lvert\mu_2(y)\rvert. $$ But this inequality is equivalent to $$\|\mu_1\|\cdot\|\mu_2\|\leq\|\mu_1\|+\|\mu_2\|,$$ which is easily verified using $\|\mu_i\|\leq 2$.

Edit: The last inequality can be seen as follows: $$ \|\mu_1\|\cdot\|\mu_2\|\leq\|\mu_1\|^2/2+\|\mu_2\|^2/2\leq\|\mu_1\|+\|\mu_2\|. $$
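The $n=2$ inequality above can also be sanity-checked numerically on random signed measures of total variation norm at most $2$; this is illustrative only, and the names `tv` and `product_tv` are invented for the sketch.

```python
import random

# Check: sum_{x,y} |mu1(x) mu2(y)| <= sum_x |mu1(x)| + sum_y |mu2(y)|
# for signed measures with total variation norm at most 2.

def tv(mu):
    """Total variation norm of a signed measure given by its point masses."""
    return sum(abs(v) for v in mu)

def product_tv(mu1, mu2):
    """Sum over (x, y) of |mu1(x) * mu2(y)|; factors as tv(mu1) * tv(mu2)."""
    return sum(abs(a * b) for a in mu1 for b in mu2)

random.seed(1)
for _ in range(1000):
    mu1 = [random.uniform(-1, 1) for _ in range(random.randint(1, 5))]
    mu2 = [random.uniform(-1, 1) for _ in range(random.randint(1, 5))]
    # Rescale so that each norm is at most 2, matching the hypothesis.
    for mu in (mu1, mu2):
        s = tv(mu)
        if s > 2:
            mu[:] = [2 * v / s for v in mu]
    assert product_tv(mu1, mu2) <= tv(mu1) + tv(mu2) + 1e-12
```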

  • Why does the last inequality hold? I don't see how $\|\mu_i\| \le 2$ helps to prove it. (2011-10-13)
  • Please see my edit. It is a general fact that $ab\leq\frac{a^2+b^2}{2}$, which follows from expanding $(a-b)^2\geq 0$ (the binomial formula). (2011-10-13)
  • Thanks for the clarification. (2011-10-14)