
Let $X \sim {\cal N}_d(0, \sigma^2I_d)$. I am interested in bounding the tail probability $P[\|X\|_1 > t]$ from above. A pointer to a known exponential or polynomial tail bound would be appreciated.

One idea for handling this probability would be something like McDiarmid's inequality, but in a form suitable for unbounded random variables.

1 Answer


A typical trick is to apply Markov's inequality to $\exp(\alpha\|X\|_1)$ with $\alpha>0$ (the Chernoff method) to obtain $P(\|X\|_1>t)\leq E(\exp(\alpha\|X\|_1))/\exp(\alpha t).$ Assume $\sigma=1$; for general $\sigma$, just replace $t$ by $t/\sigma$ throughout. Since the 1-norm is a sum and the coordinates are independent, the right-hand side equals $\exp(-\alpha t)\, E(\exp(\alpha |Z|))^d=\exp(-\alpha t)\left(\exp(\alpha^2/2) \left(1+\operatorname{erf}(\alpha/\sqrt{2})\right) \right)^d,$ where $Z$ is a one-dimensional standard normal random variable.
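As a quick numerical sanity check of the moment generating function identity above, here is a small Python sketch comparing a Monte Carlo estimate of $E(\exp(\alpha|Z|))$ with the closed form; the value $\alpha=0.7$ and the sample size are arbitrary choices.

```python
import numpy as np
from scipy.special import erf

rng = np.random.default_rng(0)

alpha = 0.7                          # any alpha > 0 works here
z = rng.standard_normal(1_000_000)   # samples of Z ~ N(0, 1)

# Monte Carlo estimate of E[exp(alpha * |Z|)]
mc = np.exp(alpha * np.abs(z)).mean()

# Closed form: exp(alpha^2 / 2) * (1 + erf(alpha / sqrt(2)))
closed = np.exp(alpha**2 / 2) * (1 + erf(alpha / np.sqrt(2)))

print(mc, closed)  # the two numbers should nearly coincide
```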

You can now optimize over $\alpha$. Substituting the cheap upper bound $\operatorname{erf}(\alpha/\sqrt{2})\leq 1$ turns the right-hand side into $2^d\exp(\alpha^2 d/2-\alpha t)$, which is minimized at $\alpha=t/d$. Plugging this in, we get

$ P(\|X\|_1>t)\leq 2^d \exp\!\left(-t^2/(2d)\right).$

I know you could do better, but maybe it will suffice for your purposes.
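As a rough numerical illustration, the sketch below compares this bound with a Monte Carlo estimate of $P(\|X\|_1>t)$ for $\sigma=1$; the values $d=10$, $t=12$, and the sample size are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(1)

d, t = 10, 12.0        # example dimension and threshold, with sigma = 1
n = 200_000            # number of Monte Carlo samples

# Empirical estimate of P(||X||_1 > t) for X ~ N_d(0, I_d)
x = rng.standard_normal((n, d))
empirical = (np.abs(x).sum(axis=1) > t).mean()

# The simplified bound 2^d * exp(-t^2 / (2d))
bound = 2.0**d * np.exp(-t**2 / (2 * d))

print(empirical, bound)  # the bound is loose here but sits above the estimate
```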

  • @ByronSchmuland Hi, could you please help me address a similar question: http://math.stackexchange.com/questions/1756400/what-is-the-concentration-upper-bound-for-frac-x-1-x-2-if-each-entr (2016-04-25)