
This argument appeared in a proof in a paper (I am paraphrasing), and I am not sure why it is true. It should be rather simple.

Let $(\Omega,P)$ be some sample space. Let $X$ be a random variable between $0$ and $1$. Let $U \subseteq \Omega$ and let

$U^\prime = \{\omega \colon X(\omega) \ge E[X \mid U] - 1/m\}$.

Then $P(U^\prime) \ge P(U)/m$.

I can't see exactly why this is true.

  • Note that my answer given below shows exactly where we use the fact that $X \leq 1$. (2010-12-07)

3 Answers

2

We can assume that $U = \Omega$: replacing $P$ by the conditional measure $P(\cdot \mid U)$ reduces the claim to showing $\Pr[X \geq E[X] - \alpha] \geq \alpha$, where $\alpha = 1/m$; then $P(U^\prime) \geq P(U^\prime \mid U)\, P(U) \geq P(U)/m$. If not, then since $X \leq 1$, $E[X] < \alpha \cdot 1 + (1-\alpha)(E[X] - \alpha) = (1-\alpha) E[X] + \alpha^2,$ so that $\alpha E[X] < \alpha^2$ and $E[X] < \alpha$. But in that case $E[X] - \alpha < 0 \leq X$, so the inequality trivially holds.
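The reduced claim $\Pr[X \geq E[X] - 1/m] \geq 1/m$ is easy to sanity-check numerically. A minimal sketch (the three distributions below are arbitrary choices of mine, not from the answer):

```python
import random

def tail_prob_holds(xs, m):
    """Empirically check Pr[X >= E[X] - 1/m] >= 1/m on a sample xs in [0, 1]."""
    alpha = 1.0 / m
    mean = sum(xs) / len(xs)
    tail = sum(x >= mean - alpha for x in xs) / len(xs)
    return tail >= alpha

rng = random.Random(1)
n = 100_000
samples = {
    "uniform": [rng.random() for _ in range(n)],
    "skewed toward 0": [rng.random() ** 4 for _ in range(n)],
    "two-point {0, 1}": [rng.choice([0.0, 1.0]) for _ in range(n)],
}
assert all(tail_prob_holds(xs, m)
           for xs in samples.values()
           for m in (1, 2, 5, 20))
```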

EDIT: Some more details. Suppose that $\Pr[X > E[X] - \alpha] = p < \alpha$. Given $p$, the distribution maximizing $E[X]$ under this constraint is $\Pr[X=E[X]-\alpha]=1-p,\quad \Pr[X=1]=p.$ Its expectation is $p + (1-p)(E[X] - \alpha)= E[X] - \alpha + p(1 - E[X] + \alpha).$ Since $p < \alpha$ and $1 - E[X] + \alpha > 0$ (this is where $X \leq 1$ is used), we get that $E[X] < E[X] - \alpha + \alpha(1 - E[X] + \alpha).$ Rearranging, $\alpha (1 + E[X]) < \alpha (1 + \alpha)$ and so $E[X] < \alpha$, in which case the inequality trivially holds.
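The extremal-distribution step amounts to the fact that for any distribution on $[0,1]$ and any cutoff $t$, $E[X] \le p \cdot 1 + (1-p)\,t$, where $p = \Pr[X > t]$. A quick randomized check of that fact (the random discrete distributions are my own construction, purely for illustration):

```python
import random

rng = random.Random(2)

# For random discrete distributions on [0, 1] and a random cutoff t,
# verify E[X] <= p*1 + (1-p)*t, where p = Pr[X > t]: mass above t
# contributes at most 1 per unit of probability, mass at or below t
# contributes at most t.
for _ in range(1000):
    vals = [rng.random() for _ in range(10)]
    raw = [rng.random() for _ in range(10)]
    total = sum(raw)
    weights = [w / total for w in raw]
    t = rng.random()
    p = sum(w for v, w in zip(vals, weights) if v > t)
    mean = sum(v * w for v, w in zip(vals, weights))
    assert mean <= p * 1.0 + (1 - p) * t + 1e-12
```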

  • Never mind, you used it for $1 - E[X] + \alpha > 0$. (2010-12-03)
1

We want to show that
$$ {\rm P}[X \ge {\rm E}(X \mid U) - 1/m] \ge {\rm P}(U)/m. $$
This is trivially not true if $0 < m < 1$, for if $U = \Omega$, the right-hand side is greater than $1$. So, assume that $m \geq 1$ and ${\rm P}(U)>0$. Further, if ${\rm E}(X \mid U) - 1/m \leq 0$, then the inequality is trivial (the left-hand side is $1$, since $X \geq 0$); hence we assume that ${\rm E}(X \mid U) - 1/m > 0$.

Let $P_{X|U}$ denote the conditional distribution of $X$ given $U$, so that
$$ P_{X|U} (A) = {\rm P}(X \in A \mid U) = \frac{{\rm P}([X \in A],U)}{{\rm P}(U)}. $$
Then,
$$ {\rm E}(X \mid U) = \int_{[0,1]} x\, P_{X|U} ({\rm d}x). $$

Next, decompose $[0,1]$ into the union of the intervals $I_1 = \lbrace 0 \leq x < {\rm E}(X \mid U) - 1/m \rbrace$ and $I_2 = \lbrace {\rm E}(X \mid U) - 1/m \leq x \leq 1 \rbrace$. On the one hand,
$$ \int_{I_1} x\, P_{X|U} ({\rm d}x) \le [{\rm E}(X \mid U) - 1/m]\, P_{X|U} (I_1) \le {\rm E}(X \mid U) - 1/m, $$
and on the other hand, since $x \leq 1$,
$$ \int_{I_2} x\, P_{X|U} ({\rm d}x) \leq \int_{I_2} P_{X|U} ({\rm d}x) = P_{X|U} (I_2) \leq \frac{{\rm P}(X \in I_2)}{{\rm P}(U)}. $$
Combining it all, we get
$$ {\rm E}(X \mid U) \leq {\rm E}(X \mid U) - 1/m + {\rm P}(X \in I_2)/{\rm P}(U). $$
That is, ${\rm P}(X \in I_2) \geq {\rm P}(U)/m$, which is exactly what we want.
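As a numerical sanity check of the general statement, here is a minimal simulation. The setup ($\Omega = [0,1]$ with the uniform measure, $X(\omega) = \omega$, $U = \{\omega > 1/2\}$) is an arbitrary illustration of mine, not from the answer; note that for these values of $m$ the threshold ${\rm E}(X \mid U) - 1/m$ is positive, so the bound is non-vacuous:

```python
import random

def check_claim(m, trials=200_000, seed=0):
    """Estimate P(U') and P(U)/m for X(w) = w uniform on [0, 1], U = {w > 1/2}."""
    rng = random.Random(seed)
    samples = [rng.random() for _ in range(trials)]
    in_u = [w for w in samples if w > 0.5]
    e_x_given_u = sum(in_u) / len(in_u)                  # approximately 0.75
    threshold = e_x_given_u - 1.0 / m                    # E[X | U] - 1/m
    p_u_prime = sum(w >= threshold for w in samples) / trials
    p_u = len(in_u) / trials
    return p_u_prime, p_u / m

for m in (2, 4, 10):
    lhs, rhs = check_claim(m)
    assert lhs >= rhs
```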

0

This looks like Chebyshev's inequality.

  • You haven't presented an argument. I was sketching the simplistic application of Chebyshev, which doesn't seem to work. (2010-12-03)