
Suppose $X=Y$ almost surely. Show that $E[X\mid G]=E[Y\mid G]$ a.s. Here $G$ is a sub-$\sigma$-field. The result looks trivial, but I am struggling to write a formal proof. The following is my attempt:

By definition, we have $\int_A E[X\mid G]\,dP=\int_A X\,dP=\int_A Y\,dP=\int_A E[Y\mid G]\,dP$ for any $A \in G$, where the middle equality uses $X=Y$ a.s.
How do I conclude $E[X\mid G]=E[Y\mid G]$ a.s. from $\int_A \big(E[X\mid G]-E[Y\mid G]\big)\,dP =0$?

(I am thinking of making use of the fact that $\int f\,dP=0$ and $f\ge 0$ together imply $f=0$ a.s.)

  • 3
I think you're misunderstanding the definition. Recall that $\mathbb{E}[X|\mathcal{G}] $ is defined as the a.s. unique $\mathcal{G}$-measurable function $f$ s.t. $\forall A \in \mathcal{G}$, $\int_A f = \int_A X$. But, since $X=Y$ a.s., for every measurable $A$, $\int_A X = \int_A Y$. Thus, if $f = \mathbb{E}[X|\mathcal{G}]$, $\int_A f = \int_A Y$ for every $\mathcal{G}$-measurable $A$, and thus $\mathbb{E}[Y|\mathcal{G}] =f = \mathbb{E}[X|\mathcal{G}]$. (2017-01-11)
  • 0
@stochasticboy321 It depends; I have already encountered definitions where the uniqueness is proved after one proves that a version of the conditional expectation exists. But indeed, if one writes $E[X\mid G]$, it is meaningless if uniqueness has not been proved yet. (2017-01-11)
  • 0
@MoebiusCorzer So, I like this definition since absolute continuity of the RV on the restricted sigma algebra wrt the original one implicitly makes sense as a model, and then it's "just" Radon-Nikodym till the cows come home. Further, I had thought that this was indeed the standard def., although that might be wrong. Now I'm curious about the alternate definitions; do you have a reference I can skim? (2017-01-11)
  • 0
@stochasticboy321 The first version I learned also involved Radon-Nikodym, but one can avoid that heavy artillery by using Hilbert space structure. The references I have are lecture notes, and these are unfortunately not available online. However, I am quite sure that it is done in the latter way in one of the following references: Dudley, "Real Analysis and Probability"; Billingsley, "Probability and Measure"; or Durrett, "Probability: Theory and Examples". (2017-01-11)
  • 0
@MoebiusCorzer Ach, I think I finally have to read some of my probability texts. I've been getting by on an old measure theory class and hot air :P. Thanks! (2017-01-11)
  • 0
@stochasticboy321 Actually, I've just checked Durrett and Dudley and they both use Radon-Nikodym. The outline of the proof with Hilbert spaces is as follows: first take a square-integrable r.v. $X\in\mathcal{L}^{2}(\Omega,\mathcal{F},P)$ and a sub-$\sigma$-algebra $\mathcal{G}$. Then take the subspace $V$ of equivalence classes of square-integrable $\mathcal{G}$-measurable r.v.'s. It is easy to see that it is a vector space. You show it is closed, then use the orthogonal decomposition theorem; the projection of the equivalence class of $X$ onto $V$ is the conditional expectation. Then you extend to $L^1$ r.v.'s. (2017-01-11)
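The Hilbert-space construction sketched in this comment can be illustrated numerically. Below is a minimal sketch on a made-up finite probability space: the partition blocks, weights, and values are all invented for illustration. On a finite space, $E[X\mid\mathcal{G}]$ for $\mathcal{G}$ generated by a partition is the block-wise weighted average, and the residual $X - E[X\mid\mathcal{G}]$ should be orthogonal in $L^2(P)$ to every $\mathcal{G}$-measurable function, i.e. to each block indicator.

```python
import numpy as np

# Hypothetical finite probability space: Omega = {0,...,5} with weights p.
rng = np.random.default_rng(0)
p = np.array([0.1, 0.2, 0.15, 0.25, 0.2, 0.1])
X = rng.normal(size=6)

# G is generated by the (made-up) partition {0,1}, {2,3,4}, {5}.
blocks = [[0, 1], [2, 3, 4], [5]]

# Conditional expectation: the weighted average of X on each block,
# constant on that block.
cond_X = np.empty(6)
for b in blocks:
    cond_X[b] = np.dot(p[b], X[b]) / p[b].sum()

# Orthogonality check: <X - E[X|G], 1_B>_{L^2(P)} = 0 for each block B,
# which is exactly the projection property from the comment above.
for b in blocks:
    indicator = np.zeros(6)
    indicator[b] = 1.0
    inner = np.dot(p, (X - cond_X) * indicator)
    print(f"<X - E[X|G], 1_B> = {inner:.2e}")
```

Since the block indicators span the $\mathcal{G}$-measurable functions here, vanishing of these inner products confirms that the block-average operator is the orthogonal projection onto $V$.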

2 Answers


What you need is the following theorem:

Consider the probability space $(\Omega,\mathcal{G},P)$ and let $f,g\in L^1$. If $\int_Af=\int_Ag$ for all $A\in \mathcal{G}$, then $f=g$ almost surely.

Proof.

Suppose first that $\int_Af\leq\int_Ag$ for all $A\in\mathcal{G}$. Considering $A=[g<f]\in\mathcal{G}$, we get $\int_A(f-g)\,dP\leq 0$, while $f-g>0$ on $A$; hence $P([g<f])=0$, i.e. $f\leq g$ a.s. Applying the same argument with the roles of $f$ and $g$ exchanged gives $g\leq f$ a.s., and therefore $f=g$ almost surely. $\square$
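The mechanism behind the theorem can be checked numerically. Below is a minimal sketch on a made-up four-point space (all values are invented): if $f\neq g$ on a set of positive measure, then the witnessing sets $\{f>g\}$ and $\{f<g\}$ from the proof produce unequal integrals, so the hypothesis $\int_A f=\int_A g$ for all $A$ can only hold when $f=g$ a.s.

```python
import numpy as np

# Hypothetical finite space with strictly positive weights.
p = np.array([0.25, 0.25, 0.25, 0.25])
f = np.array([1.0, 2.0, 3.0, 4.0])
g = np.array([1.0, 2.0, 2.5, 4.5])  # differs from f on a positive-measure set

# The witnessing sets from the proof.
A_plus = f > g    # {f > g}
A_minus = f < g   # {f < g}

# On {f > g} the integral of f strictly exceeds that of g (and
# symmetrically on {f < g}), so the equal-integrals hypothesis fails.
print("int_{f>g} (f - g) dP =", np.dot(p[A_plus], (f - g)[A_plus]))
print("int_{f<g} (f - g) dP =", np.dot(p[A_minus], (f - g)[A_minus]))
```

The first printed integral is strictly positive and the second strictly negative, which is exactly how the proof forces $P(f\neq g)=0$ under the hypothesis.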

  • 0
Don't you only need this for the converse of the OP's question? (2017-01-11)

Let $(\Omega,\mathcal{F},\mathbb{P})$ be our probability space and let $\mathcal{G}\subset\mathcal{F}$ be a sub-$\sigma$-algebra. I consider only integrable r.v.'s, for otherwise the conditional expectation is not defined. We have (see below):

$$E[XI_{G}]=E[YI_{G}]$$

for all $G\in\mathcal{G}$ (actually, it holds for any $G\in\mathcal{F}$).

Now, by definition of the conditional expectation, there exists a version $Z$ of $E[X\mid\mathcal{G}]$ and a version $Z'$ of $E[Y\mid\mathcal{G}]$ such that:

$$E[XI_{G}]=E[ZI_{G}]=E[YI_{G}]=E[Z'I_{G}]$$

for all $G\in\mathcal{G}$. This implies that $Z=Z'$ a.s. (in the sense that there exists a $\mathcal{G}$-measurable function $N$ such that $N=0$ a.s. and $Z=Z'+N$).

Note that if $V,W$ are two $\mathcal{A}$-measurable integrable functions, we have $V=W$ a.s. if and only if $E[VI_{A}]=E[WI_{A}]$ for every $A$ in $\mathcal{A}$. One direction is easy. For the "if" part, consider $A=\{V-W>0\}$ and $A=\{V-W<0\}$.
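The whole argument can be seen concretely on a finite space. Below is a minimal sketch with made-up numbers: $X$ and $Y$ differ only on an atom of probability zero (so $X=Y$ a.s.), $\mathcal{G}$ is generated by a hypothetical two-block partition, and the block-average versions of the conditional expectations come out equal, as do the integrals $E[XI_{G}]$ and $E[YI_{G}]$.

```python
import numpy as np

# Hypothetical four-point space with one null atom at index 3.
p = np.array([0.3, 0.3, 0.4, 0.0])
X = np.array([1.0, 2.0, 3.0, 100.0])
Y = np.array([1.0, 2.0, 3.0, -7.0])   # X = Y a.s.: they differ only where P = 0

# Made-up partition generating G.
blocks = [[0, 3], [1, 2]]

def cond_exp(Z):
    """Version of E[Z|G]: weighted average of Z on each partition block."""
    out = np.empty_like(Z)
    for b in blocks:
        out[b] = np.dot(p[b], Z[b]) / p[b].sum()
    return out

cx, cy = cond_exp(X), cond_exp(Y)

# E[X I_G] = E[Y I_G] for each generator G of the sigma-algebra,
# because the null atom contributes nothing to either integral.
for b in blocks:
    print("E[X 1_G] =", np.dot(p[b], X[b]), " E[Y 1_G] =", np.dot(p[b], Y[b]))

# The two versions of the conditional expectation agree.
print("E[X|G] =", cx, " E[Y|G] =", cy)
```

The null atom's value ($100$ vs. $-7$) is weighted by zero in every block average, which is the finite-space shadow of the statement that $Z=Z'$ up to a null $\mathcal{G}$-measurable correction $N$.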