I have a sequence of random variables and want to work out the details of a proof. The author is not very rigorous: he uses conditional expectations $E(Y | X_k)$, where $X_k$ is the current value of an iterative process $X_{k+1} = X_k + Z_k$ and the $Z_k$ are random variables introducing the randomness (you may assume a deterministic start point $X_0$).
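To fix ideas, here is a minimal simulation of the process as I understand it (the distribution of the $Z_k$ is not specified by the author; i.i.d. standard Gaussian increments and the concrete dimensions are purely my own assumptions, for illustration only):

```python
import numpy as np

# Minimal sketch of X_{k+1} = X_k + Z_k with a deterministic start X_0.
# The Gaussian choice for Z_k is an assumption, purely for illustration.
rng = np.random.default_rng(0)

n = 2                 # dimension of the state space R^n
K = 100               # number of iterations
x0 = np.zeros(n)      # deterministic start point X_0

X = np.empty((K + 1, n))
X[0] = x0
for k in range(K):
    Z_k = rng.standard_normal(n)   # the chance entering at step k
    X[k + 1] = X[k] + Z_k          # X_{k+1} = X_k + Z_k
```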
As far as I can see, there are two possible ways to understand $E(\cdot | X_k)$:

1) The conditional expectation could be read as $E(\cdot | \{\omega \in \Omega | X_k(\omega) = x_k\})$, where the author does not distinguish between $X_k$ (the random variable) and $x_k$ (its value).

2) The conditional expectation is defined as $E(Y | X_k) = E(Y | \sigma(X_k))$, where $\sigma(X_k) = \sigma(\{{X_k}^{-1}(A) | A \in \mathcal{E}^n\})$ is the smallest $\sigma$-algebra containing these preimages.
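To make the contrast concrete, here is how I currently picture the two objects (the elementary formula for 1) is my own rewriting and assumes $P(X_k = x_k) > 0$, e.g. a discrete $X_k$). In reading 1),

$$E(Y | X_k = x_k) = \frac{E\big(Y \, \mathbf{1}_{\{X_k = x_k\}}\big)}{P(X_k = x_k)} \in \mathbb{R}^n$$

is a fixed vector, whereas in reading 2), $E(Y | \sigma(X_k))$ is itself a $\sigma(X_k)$-measurable random variable $\Omega \rightarrow \mathbb{R}^n$, characterized almost surely by the partial averaging property

$$\int_A E(Y | \sigma(X_k)) \, dP = \int_A Y \, dP \quad \text{for all } A \in \sigma(X_k).$$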
At first I thought reading 1) was the one to use, since it seemed natural to me, but then I found the definition that supports option 2).
Is 2) in the end the same as 1)? Is 1) wrong?
(The random variables are $Y, X_k: \Omega \rightarrow \mathbb{R}^n$, with the Borel $\sigma$-algebra $\mathcal{E}^n$ on $\mathbb{R}^n$.)
ADDED: How should one view/interpret $\sigma(\{{X_k}^{-1}(A) | A \in \mathcal{E}^n\})$?
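For instance, in the simplest case I could work out myself: if $X_k$ takes only two values $a \neq b$, then every preimage ${X_k}^{-1}(A)$, $A \in \mathcal{E}^n$, is one of $\emptyset$, $\{X_k = a\}$, $\{X_k = b\}$, $\Omega$, so

$$\sigma(X_k) = \{\emptyset, \{X_k = a\}, \{X_k = b\}, \Omega\}.$$

Is the right intuition for the general case that $\sigma(X_k)$ consists exactly of the events that can be decided by observing the value of $X_k$?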