
These two well-known Chernoff bounds for the sum of random variables $X=\sum_{k=1}^{n}X_k$, in multiplicative form,

$$\mathbf{P}(X \leq (1- \delta)\mathbf{E}X) \leq e^{-\delta^2 \mathbf{E}X/2}, \qquad \mathbf{P}(X \geq (1+ \delta)\mathbf{E}X) \leq \left(\frac{e^{\delta}}{(1+\delta)^{1+\delta}} \right)^{\mathbf{E}X},$$

apply when the $X_k$ take values 0 and 1. I'm wondering whether and how these results extend to a more general case, e.g. when the $X_k$ are i.i.d. exponentially distributed.
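For reference, here is a quick numerical sanity check of the two quoted bounds in the 0/1 case (a minimal sketch; the parameters $n$, $p$, $\delta$ and the sample size are arbitrary illustration choices, not from the question):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, delta = 1000, 0.3, 0.1     # arbitrary illustration parameters
mean = n * p                     # E[X] for X = sum of n Bernoulli(p) variables

# Empirical tail probabilities via Monte Carlo
samples = rng.binomial(n, p, size=200_000)
lower_emp = np.mean(samples <= (1 - delta) * mean)
upper_emp = np.mean(samples >= (1 + delta) * mean)

# The two Chernoff bounds quoted above
lower_bound = np.exp(-delta**2 * mean / 2)
upper_bound = (np.exp(delta) / (1 + delta)**(1 + delta))**mean

print(f"lower tail: {lower_emp:.4f} <= {lower_bound:.4f}")
print(f"upper tail: {upper_emp:.4f} <= {upper_bound:.4f}")
```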

  • If $\mathbf{E} X_k = \frac{1}{\mu}$, then $\mathbf{E}X = \frac{n}{\mu}$ and the MGF of $X$ is $\varphi_{X}(t) = \left(\frac{\mu}{\mu-t} \right)^{n}$ for $0 \leq t < \mu$, so the upper bound from the Markov inequality is $\mathbf{P}(X>(1+\delta)\mathbf{E}X) < \left(\frac{\mu}{\mu-t} \right)^{n}e^{-t(1+\delta)\mathbf{E}X}$, but I do not understand the whole optimization part. – 2012-09-14

1 Answer


As explained by the OP in a comment, the Markov (exponential) inequality yields
$$\mathbf{P}(X\gt(1+\delta)n/\mu)\leqslant \exp(-nI(t/\mu,\delta))$$
for every $0\lt t\lt\mu$, where
$$I(s,\delta)=s(1+\delta)+\log(1-s).$$
Since the bound decreases in $I$, one maximizes $I$ over $s\in(0,1)$. The derivative is $\partial_sI(s,\delta)=1+\delta-1/(1-s)$, hence the optimal choice is $s_\delta=\delta/(1+\delta)$, which yields $I(s_\delta,\delta)=\delta-\log(1+\delta)$, and finally,
$$\mathbf{P}(X\gt(1+\delta)n/\mu)\leqslant\left(\frac{1+\delta}{\mathrm e^\delta}\right)^n.$$
To sum up, the Markov (exponential) inequality yields a family of upper bounds, indexed by $t$ (or by $s$), and one chooses the value of $s$ (or of $t$) which minimizes this upper bound.
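For what it's worth, here is a small numerical check of the final bound (a sketch with arbitrary $n$, $\mu$, $\delta$; it uses the fact that a sum of $n$ i.i.d. $\mathrm{Exp}(\mu)$ variables has a Gamma distribution, so the exact tail is available for comparison):

```python
import numpy as np
from scipy import stats

n, mu, delta = 20, 2.0, 0.5          # arbitrary illustration parameters
threshold = (1 + delta) * n / mu     # (1 + delta) * E[X]

# Exact tail: a sum of n iid Exp(mu) variables is Gamma(n, scale=1/mu)
exact = stats.gamma.sf(threshold, a=n, scale=1 / mu)

# The optimized Chernoff bound derived above
bound = ((1 + delta) / np.exp(delta))**n

print(f"exact tail {exact:.5f} <= bound {bound:.5f}")
```

By Cramér's theorem, the exponent $\delta-\log(1+\delta)$ is in fact the exact large-deviations rate for this sum, so the bound is tight on the exponential scale even though the prefactor is not.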

  • That's nice. – 2012-09-16