
For example, take a binomial RV $S_n \sim \mathrm{Binomial}(n,\frac{1}{n})$. I'd like to use Markov's inequality to bound the tail probability:

$ P(S_n \geq \varepsilon) \leq \frac{\mathbf{E}S_n}{\varepsilon} $

The PGF of this binomial RV is

$ \mathbf{E}z^{S_n}=\sum_{k=0}^{n}\binom{n}{k}z^k \frac{1}{n^k}\left(1-\frac{1}{n}\right)^{n-k}=\left(1+\frac{z-1}{n}\right)^n $
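As a quick sanity check of the closed form, here is a short sketch (my own function names, not from the question) comparing the PGF computed from the definition with $\left(1+\frac{z-1}{n}\right)^n$:

```python
from math import comb

def pgf_direct(z, n):
    """E[z^{S_n}] for S_n ~ Binomial(n, 1/n), summed straight from the definition."""
    p = 1.0 / n
    return sum(comb(n, k) * (z * p)**k * (1 - p)**(n - k) for k in range(n + 1))

def pgf_closed(z, n):
    """The closed form (1 + (z-1)/n)^n."""
    return (1 + (z - 1) / n) ** n

# the two expressions agree for any z and n
for n in (2, 10, 100):
    for z in (0.0, 0.5, 1.0, 2.0, 5.0):
        assert abs(pgf_direct(z, n) - pgf_closed(z, n)) < 1e-9
```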

Therefore, for any $z > 1$, Markov's inequality becomes $ P(S_n \geq \varepsilon)=P(z^{S_n} \geq z^{\varepsilon}) \leq \frac{\mathbf{E}z^{S_n}}{z^{\varepsilon}}=\frac{\left(1+\frac{z-1}{n}\right)^n}{z^{\varepsilon}} $
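To see that this really is an upper bound for any admissible $z$, here is a minimal numeric check (function names `tail_exact` and `markov_bound` are mine, chosen for illustration):

```python
from math import comb, ceil

def tail_exact(eps, n):
    """P(S_n >= eps) for S_n ~ Binomial(n, 1/n), by summing the pmf."""
    p = 1.0 / n
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(ceil(eps), n + 1))

def markov_bound(eps, n, z):
    """The bound E[z^{S_n}] / z^eps = (1 + (z-1)/n)^n / z^eps, valid for z >= 1."""
    return (1 + (z - 1) / n) ** n / z ** eps

# the bound holds for every z >= 1, though its quality depends on z
n, eps = 100, 5.0
for z in (1.0, 2.0, 4.0, 8.0):
    assert tail_exact(eps, n) <= markov_bound(eps, n, z) + 1e-12
```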

My question is: how does one derive $z$? I know, of course, that $\mathbf{E}S_n=\frac{d}{dz}\mathbf{E}z^{S_n}\big|_{z=1}=1$ (in this case), but I wonder whether this tail probability can be derived in a different way.

EDIT: as far as I know, this has something to do with 'dominant singularities', but MSE/MO don't seem to have any relevant questions.

1 Answer


The question *how to derive $z$?* makes no sense, since one does not derive $z$; rather, one uses the upper bound $ u(z)=\frac1{z^{\varepsilon}}\left(1+\frac{z-1}n\right)^n $ for every $z\geqslant1$. The next step is usually to optimize $u(z)$ over $z\geqslant1$ for every fixed $\varepsilon\gt1$, which is done as follows: the derivative is $$u'(z)=u(z)\cdot\left(\frac{n}{n+z-1}-\frac{\varepsilon}z\right)\propto nz-\varepsilon(n+z-1)=(n-\varepsilon)z-\varepsilon(n-1),$$ hence one chooses $z_\varepsilon=\frac{\varepsilon(n-1)}{n-\varepsilon}$ if $\varepsilon\lt n$ (the case $\varepsilon\gt n$ being empty, since $S_n\leqslant n$)...
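One can check numerically that this stationary point is indeed where the bound is tightest. A minimal sketch, assuming the setup above ($p=1/n$; the name `u` mirrors the answer's notation, `z_opt` is mine):

```python
def u(z, n, eps):
    """u(z) = (1 + (z-1)/n)^n / z^eps, the Markov/Chernoff bound for z >= 1."""
    return (1 + (z - 1) / n) ** n / z ** eps

n, eps = 100, 5.0
z_opt = eps * (n - 1) / (n - eps)   # the stationary point z_eps derived above

# u(z_opt) is no larger than u at any other admissible z we try
for z in (1.0, 1.5, 2.0, z_opt - 0.1, z_opt + 0.1, 10.0, 50.0):
    assert u(z_opt, n, eps) <= u(z, n, eps) + 1e-12
```

Since $u'(z)\propto(n-\varepsilon)z-\varepsilon(n-1)$ is negative below $z_\varepsilon$ and positive above it, $z_\varepsilon$ is a minimum, which the loop above confirms for a few sample points.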

At the end of the day, one should recover a particular case of Hoeffding's inequality, which implies that, for every random variable $X$ with binomial distribution of parameters $(n,p)$ and every positive $x$, $ \mathrm P(X \geqslant (1+x)np)\leqslant \mathrm e^{-2np^2x^2}. $
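A quick numeric check of this, using Hoeffding's inequality in the form $\mathrm P(X\geqslant np+t)\leqslant \mathrm e^{-2t^2/n}$ with $t=xnp$ (function names are mine, for illustration):

```python
from math import comb, ceil, exp

def tail_exact(eps, n, p):
    """P(X >= eps) for X ~ Binomial(n, p), by summing the pmf."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(ceil(eps), n + 1))

def hoeffding(x, n, p):
    """Hoeffding's bound on P(X >= (1+x)np): exp(-2 t^2 / n) with t = x n p."""
    return exp(-2 * n * p**2 * x**2)

n = 50
p = 1.0 / n          # the question's case: np = 1
for x in (0.5, 1.0, 2.0, 4.0):
    eps = (1 + x) * n * p       # the threshold (1+x)*np
    assert tail_exact(eps, n, p) <= hoeffding(x, n, p)
```

Note that for $p=1/n$ the exponent $-2np^2x^2=-2x^2/n$ vanishes as $n\to\infty$, so in this regime the optimized Chernoff bound above is much sharper than Hoeffding's.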

  • Here, $\propto$ means *has the same sign as* (since its sign is the only relevant feature of the derivative $u'$).