7

I don't really know how to get started on this problem.

Let $\xi$ and $\eta$ be independent, identically distributed random variables with $E(|\xi|)$ finite.

Show that $E(\xi\mid\xi+\eta)=E(\eta\mid\xi+\eta)=\frac{\xi+\eta}{2}$.

Does anyone have an idea for how to start?

  • See also an answer [here](https://math.stackexchange.com/questions/1842364/conditional-expectation-of-independent-variables/1842770#1842770). (2017-08-09)

4 Answers

15

There's a subtle point here, which bothered me the first time I saw this problem.

Henry's answer has the essential idea, which is to use symmetry. User Did's comment points out that the symmetry comes from the fact that $(\xi, \eta)$ and $(\eta, \xi)$ are identically distributed. But, straight from the definition of conditional expectation, it isn't clear that symmetry in the joint distributions is enough to get the result. I ended up having to prove the following lemma:

Lemma. Let $X,Y$ be random variables. There is a measurable function $f$ such that $E[X\mid Y] = f(Y)$ a.s. Moreover, if $(X', Y')$ is identically distributed to $(X,Y)$, then $E[X' \mid Y'] = f(Y')$ a.s. for the same function $f$.

Proof. The existence of $f$ is a consequence of the Doob-Dynkin Lemma. For the second part, we use the definition of conditional expectation. $f(Y')$ is clearly $\sigma(Y')$-measurable, so it remains to show that for any $A \in \sigma(Y')$, we have $E[1_A f(Y')] = E[1_A X']$. Since $A \in \sigma(Y')$, $A = (Y')^{-1}(B)$ for some Borel set $B$ (this fact is part of the proof of Doob-Dynkin). But since $(X',Y')$ has the same distribution as $(X,Y)$, we get $\begin{align*} E[1_A f(Y')] &= E[1_B(Y') f(Y')] \\ &= E[1_B(Y) f(Y)] \\ &= E[1_B(Y) E[X \mid Y]] \\ &= E[1_B(Y) X] && \text{since $1_B(Y)$ is $\sigma(Y)$-measurable}\\ &= E[1_B(Y') X'] \\ &= E[1_A X'] \end{align*}$ as desired.
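To connect the lemma back to the original problem: apply it with $(X, Y) = (\xi, \xi+\eta)$ and $(X', Y') = (\eta, \eta+\xi)$. These pairs are identically distributed, since they are the images of $(\xi,\eta)$ and $(\eta,\xi)$ under the same measurable map $(u,v) \mapsto (u, u+v)$. The lemma then yields a single measurable $f$ with $$E[\xi \mid \xi+\eta] = f(\xi+\eta) \quad \text{and} \quad E[\eta \mid \xi+\eta] = f(\eta+\xi) = f(\xi+\eta) \quad \text{a.s.},$$ so the two conditional expectations coincide and Henry's symmetry argument below goes through.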

It is worth noting that the function $f$ is generally not unique. In particular, we could modify $f$ almost arbitrarily on any set $C \subset \mathbb{R}$ such that $P(Y \in C)=0$.

Also, to address the point in kkk's comment: just knowing that $\xi, \eta$ are identically distributed is not sufficient. Here is a counterexample. Let $\Omega = \{a,b,c\}$ have three outcomes, each with probability $1/3$ (and $\mathcal{F} = 2^\Omega$). Let $X(a) = 0$, $X(b)=1$, $X(c)=2$; and $Y(a)=1$, $Y(b)=2$, $Y(c)=0$. Thus $X$ is uniformly distributed on $\{0,1,2\}$, and $Y = X + 1 \bmod 3$, so $Y$ is also uniformly distributed on $\{0,1,2\}$.

Now we have $(X+Y)(a) = 1$, $(X+Y)(b)=3$, $(X+Y)(c)=2$. So $X+Y$ is a 1-1 function on $\Omega$ and thus $\sigma(X+Y) = \mathcal{F}$, so both $X,Y$ are $\sigma(X+Y)$-measurable. Thus $E[X\mid X+Y]=X$, $E[Y\mid X+Y]=Y$. However, $X$, $Y$, and $\frac{X+Y}{2}$ are all different.
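To see the failure concretely, here is a minimal enumeration of the example (a sanity check, not part of the argument). Note that $X$ and $Y$ above are identically distributed, yet the pairs $(X,Y)$ and $(Y,X)$ are not.

```python
# Enumeration check of the three-outcome counterexample.
# Outcomes a, b, c each have probability 1/3.
X = {"a": 0, "b": 1, "c": 2}
Y = {"a": 1, "b": 2, "c": 0}
S = {w: X[w] + Y[w] for w in X}  # S = X + Y

# S takes a different value on each outcome, so sigma(S) = 2^Omega
# and E[X | S] = X, E[Y | S] = Y pointwise.
assert len(set(S.values())) == 3

for w in sorted(X):
    print(f"{w}: E[X|S] = {X[w]}, E[Y|S] = {Y[w]}, (X+Y)/2 = {S[w] / 2}")
# On every outcome the three quantities differ, so the conclusion fails.
```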

  • It seems silly to think that identical distribution of $\xi$ and $\eta$ would be of consequence here. It is identical distribution of the *pairs* $(\eta,\xi)$ and $(\xi,\eta)$ that matters. (2018-02-10)
5

$E(\xi\mid \xi+\eta)=E(\eta\mid \xi+\eta)$ since the pairs $(\xi,\eta)$ and $(\eta,\xi)$ are identically distributed. (Independence is not needed for this step; as Nate Eldredge's answer above explains, it is the symmetry of the joint distribution, not just of the marginals, that matters.)

So $2E(\xi\mid \xi+\eta)=2E(\eta\mid \xi+\eta) = E(\xi\mid \xi+\eta)+E(\eta\mid \xi+\eta) =E(\xi+\eta\mid \xi+\eta) = \xi+\eta$, where the last equality holds because $\xi+\eta$ is $\sigma(\xi+\eta)$-measurable.

Now divide by two.
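Not needed for the proof, but if you want to see the identity numerically, here is a quick Monte Carlo sketch. The choice of Exponential(1) draws is an arbitrary assumption for concreteness; any integrable i.i.d. pair works. Within each small bin of values of $s = \xi+\eta$, the sample average of $\xi$ should be close to $s/2$.

```python
# Monte Carlo sanity check of E[xi | xi + eta] = (xi + eta) / 2.
# Distribution choice (Exponential(1)) is arbitrary; any i.i.d.
# integrable pair should show the same pattern.
import numpy as np

rng = np.random.default_rng(0)
n = 10**6
xi = rng.exponential(1.0, n)
eta = rng.exponential(1.0, n)
s = xi + eta

# Bin on the value of s and compare conditional sample means with s/2.
edges = np.linspace(0.0, 6.0, 13)   # bins of width 0.5
which = np.digitize(s, edges)
for k in (3, 6, 9):                 # a few representative bins
    mask = which == k
    print(f"s in [{edges[k-1]:.1f}, {edges[k]:.1f}): "
          f"mean xi = {xi[mask].mean():.3f}, mean s/2 = {(s[mask] / 2).mean():.3f}")
```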

  • @DannyPak-KeungChan Believe it or not, my comment was not referring to your answer (whose necessity, by the way, nearly 6 years after Nate provided a full answer, I fail to be convinced about). (2017-08-08)
3

Let me state the result in full and prove it in detail.

Proposition: Let $(\Omega,\mathcal{F},P)$ be a probability space. Let $X,Y$ be i.i.d. random variables with $E\left[|X|\right]<\infty$. Let $\mathcal{G}=\sigma(X+Y)$. Then $\operatorname{E} \left[X \mid \mathcal{G}\right] = \operatorname{E} \left[Y \mid \mathcal{G}\right]=\frac{1}{2}(X+Y)$.

Proof: Let $\mu_{XY}$ be the joint distribution measure on $\mathbb{R}^{2}$ induced by $(X,Y)$. That is, $\mu_{XY}(B)=P\left(\left\{ \omega \mid (X(\omega), Y(\omega)) \in B\right\} \right)$. Let $\mu_X$ and $\mu_Y$ be the distribution measures on $\mathbb{R}$ induced by $X$ and $Y$ respectively. Since $X$ and $Y$ are independent, we have $\mu_{XY}=\mu_X\times\mu_Y$. Moreover, since $X$ and $Y$ are identically distributed, $\mu_X=\mu_Y$. We denote $\mu=\mu_X=\mu_Y$.

Let $A\in\mathcal{G}$ be arbitrary. There exists a Borel set $B\subseteq\mathbb{R}$ such that $A=(X+Y)^{-1}(B)$. Hence $1_{A}(\omega)=1_{B}(X(\omega)+Y(\omega))$ for any $\omega\in\Omega$.

We have \begin{align} & \int_A \operatorname{E}\left[X\mid\mathcal{G}\right]\,dP = \int_A X\,dP=\int 1_B(X+Y)X \, dP = \int 1_B(x+y)x\,d\mu_{XY}(x,y) \\[10pt] = {} & \iint1_{B}(x+y)x\,d\mu_{X}(x) \, d\mu_Y(y) = \iint 1_B(x+y)x \, d\mu(x) \, d\mu(y). \end{align} By the same argument, $ \int_A \operatorname{E}\left[Y\mid\mathcal{G}\right]\,dP=\iint1_{B}(x+y)y \, d\mu(x) \, d\mu(y). $ Interchanging the names of the integration variables $x$ and $y$ (which Fubini's theorem permits, since both factors of the product measure are the same $\mu$) shows that these two iterated integrals are equal. Hence $ \int_A \operatorname{E}\left[X\mid\mathcal{G}\right]\,dP=\int_A \operatorname{E} \left[Y\mid\mathcal{G}\right] \,dP $ for every $A\in\mathcal{G}$, and therefore $\operatorname{E} \left[X \mid \mathcal{G}\right] = \operatorname{E}\left[Y \mid \mathcal{G}\right]$ a.s. Lastly, $\operatorname{E}\left[X+Y\mid\mathcal{G}\right]=X+Y$ because $X+Y$ is $\mathcal{G}$-measurable. It follows that $\operatorname{E}\left[X\mid\mathcal{G}\right]=\operatorname{E} \left[Y \mid \mathcal{G} \right]=\frac 1 2 (X+Y)$.
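As a concrete instance of the proposition, take $X, Y$ i.i.d. Bernoulli$(p)$ with $0<p<1$, so that $X+Y\in\{0,1,2\}$. Computing directly, $$\operatorname{E}[X \mid X+Y=0]=0, \qquad \operatorname{E}[X \mid X+Y=2]=1,$$ and $$\operatorname{E}[X \mid X+Y=1]=\frac{P(X=1,\,Y=0)}{P(X+Y=1)}=\frac{p(1-p)}{2p(1-p)}=\frac{1}{2},$$ so in every case $\operatorname{E}[X \mid X+Y=s]=\frac{s}{2}$, exactly as the proposition predicts.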

  • @CWL: It is from the following fact: For any Borel function $f:\mathbb{R}^2\rightarrow \mathbb{R}$ that is $\mu_{X,Y}$-integrable, we have $\int f(X,Y)\,dP = \int f(x,y) \, d\mu_{X,Y}(x,y)$. This fact can be proved in the standard way: firstly it is true for all indicator functions of Borel subsets of $\mathbb{R}^2$. Then, by linearity, it is true for all simple functions. Then, by the Monotone Convergence Theorem, it is true for all non-negative Borel functions. Lastly, by decomposing into positive part and negative part... (2017-11-13)
0

The assumption of independence can be weakened. You say $\xi,\eta$ are i.i.d. A consequence is:

$ \text{ The pairs } (\xi,\eta) \text{ and }(\eta,\xi) \text{ both have the same distribution.} \tag 1 $

Therefore $\operatorname E(\xi\mid \xi+\eta) = \operatorname E(\eta\mid \xi+\eta).$ Next, observe that $\operatorname E(\xi\mid\xi+\eta) + \operatorname E(\eta\mid\xi+\eta) = \operatorname E(\xi+\eta \mid \xi+\eta) = \xi+\eta.$

Thus we have $ \operatorname E(\xi\mid\xi+\eta) = \operatorname E(\eta\mid \xi+\eta) = \frac{\xi+\eta} 2. $ The statement $(1)$ above is weaker than "i.i.d." but stronger than having identical marginal distributions.
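For example, the pair $(\xi,\xi)$ satisfies $(1)$ even though its coordinates are (in general) not independent, and the conclusion indeed holds there: $\operatorname E(\xi \mid \xi+\xi) = \operatorname E(\xi \mid 2\xi) = \xi = \frac{\xi+\xi}{2}$. On the other hand, Nate Eldredge's three-point example above has identically distributed coordinates but fails $(1)$, and the conclusion fails with it.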