In proposition 1.1 of "Ten Lectures on Random Media" by Bolthausen and Sznitman, the authors conclude that for a random walk in a random environment in 1 dimension, the environment viewed from the particle process is Markovian, with respect to both the quenched measure and the annealed measure. This is standard stuff, but I don't understand their method.
My question, generalized as much as possible, is the following:
Suppose we have a sequence of random variables $\omega_0, \omega_1, \ldots$, with state space $\Omega$.
Why is it that if, for all bounded measurable functions $f_0,\ldots, f_{n+1}$ on $\Omega$, we have $ \mathbb{E}\bigl[f_{n+1}(\omega_{n+1})\,f_n(\omega_n)\cdots f_0(\omega_0)\bigr] = \mathbb{E}\bigl[(Rf_{n+1})(\omega_n)\,f_n(\omega_n)\cdots f_0(\omega_0)\bigr], $ where $R$ is a transition kernel of the process (viewed as an operator on bounded measurable functions), then the chain is Markovian with transition kernel $R$?
Again, I'm sure this is basic conditional expectation stuff, but I'm having trouble seeing the connection.
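To make the question precise, the statement I believe I need to extract from the displayed identity is the following (writing $\mathcal{F}_n := \sigma(\omega_0,\ldots,\omega_n)$ for the natural filtration, which is my notation, not the authors'):

$$\mathbb{E}\bigl[f_{n+1}(\omega_{n+1}) \,\big|\, \mathcal{F}_n\bigr] = (Rf_{n+1})(\omega_n) \quad \text{a.s., for every bounded measurable } f_{n+1},$$

which would say exactly that $(\omega_n)$ is Markovian with transition operator $R$.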
P.S. In case you're curious, the generator $R$ is given by $Rf(\omega) = \sum_{|e|=1} p(0,e,\omega)\, f(t_e\omega)$, where $p(0,e,\omega)$ is the probability of the random walk at $0$ jumping along the edge $e$ in the random environment $\omega \in \Omega$, and $t_e$ is the shift of $\mathbb{Z}^d$ bringing $e$ to $0$. So the chain $\omega_n$ above is the environment translated by the position of the random walk $X_n$ in the environment $\omega$, i.e., $\omega_n := t_{X_n}\omega$. The expectation written above is actually the quenched expectation $E_{0,\omega}$; the Markov property for the annealed process then follows from the quenched one by taking the expectation over the environment.
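For concreteness, here is a minimal numerical sketch of the chain $\omega_n = t_{X_n}\omega$ in $d = 1$ under the quenched law. The ring of size `L` (a finite stand-in for $\mathbb{Z}$), the array `omega`, and the helper `step` are my own illustrative choices, not anything from the lectures.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-d environment on a ring of size L (finite stand-in for Z, so the
# shift t_e is a circular shift of the array). omega[x] is the probability
# of jumping from site x to x+1; 1 - omega[x] of jumping to x-1.
L = 50
omega = rng.uniform(0.3, 0.7, size=L)

def step(env, rng):
    """One quenched step of the environment viewed from the particle:
    jump right with probability env[0], left otherwise, then re-center
    the environment at the walker (omega_{n+1} = t_{X_{n+1}} omega)."""
    if rng.random() < env[0]:
        return np.roll(env, -1)   # walker moved to +1: shift environment left
    return np.roll(env, 1)        # walker moved to -1: shift environment right

env = omega.copy()                # omega_0 = omega (walk starts at 0)
for n in range(10):
    env = step(env, rng)
    print(n + 1, env[0])          # jump probability seen at the current site
```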