
Let us consider the state space $\{0,1,\dots,M\}$ for some $M\in \mathbb N$ and put $N$ walkers on it: $ X = (X_1,\dots,X_N). $ Each of the walkers moves independently, they may occupy the same point of the state space at the same time, and they reflect off the boundaries. More precisely, $ X_i(n+1) = \begin{cases} 1,&\text{ if }X_i(n) \leq 0 \\ M-1,&\text{ if }X_i(n) \geq M \\ X_i(n)+\xi_i(n),&\text{ otherwise.} \end{cases} $

Here the $\xi_i(n)$ take the values $-1$ and $1$, each with probability $\frac12$, and are all independent. I was asked to help write an algorithm for a Monte-Carlo simulation of this problem, which is quite an easy task and didn't take much time.
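For what it's worth, a minimal Python sketch of such a simulation could look like the following (the function names `step` and `simulate` are my own, not part of the original problem):

```python
import random

def step(x, M):
    """One update of a single walker with reflection at 0 and M."""
    if x <= 0:
        return 1
    if x >= M:
        return M - 1
    return x + random.choice((-1, 1))   # xi = -1 or +1 with probability 1/2

def simulate(N, M, n_steps, start=0):
    """Run N independent walkers for n_steps and return the trajectory."""
    walkers = [start] * N
    path = [tuple(walkers)]
    for _ in range(n_steps):
        walkers = [step(x, M) for x in walkers]
        path.append(tuple(walkers))
    return path

# Example: 3 walkers on {0,...,10}, 1000 steps
print(simulate(N=3, M=10, n_steps=1000)[-5:])
```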

On the other hand, I realized that these people are trying to simulate quantities which can be found analytically in a more convenient way: namely, as expectations with respect to the invariant distribution. I think that would make their life much easier, especially if an explicit formula is available.


The problem is that the corresponding Markov chain has $(M+1)^N$ states and is not irreducible: e.g. if $N=M=2$ and both walkers start at $0$, then at no moment of time can one of them be at $0$ and the other at $1$, because the sum of their positions is always even.
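To see the reducibility concretely, one can enumerate the joint states reachable from $(0,0)$. The following is only an illustrative sketch (the helpers `neighbours` and `reachable` are mine), assuming the $\pm 1$ step rule above:

```python
from itertools import product

def neighbours(state, M):
    """All joint states reachable in one step from `state` (each xi_i = -1 or +1)."""
    moves = []
    for x in state:
        if x <= 0:
            moves.append((1,))        # forced jump to 1 (reflection at 0)
        elif x >= M:
            moves.append((M - 1,))    # forced jump to M-1 (reflection at M)
        else:
            moves.append((x - 1, x + 1))
    return set(product(*moves))

def reachable(start, M):
    """Breadth-first search over the joint chain starting from `start`."""
    seen = {start}
    frontier = {start}
    while frontier:
        frontier = {t for s in frontier for t in neighbours(s, M)} - seen
        seen |= frontier
    return seen

M = 2
states = reachable((0, 0), M)
print(sorted(states))                        # (0,1) and (1,0) never appear
print(len(states), "of", (M + 1) ** 2, "joint states are reachable")
```

For $N=M=2$ this reports 5 of the 9 states, exactly the states with even coordinate sum.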

Still, the problem has a very simple description, so I hope that an analytic characterization of the irreducible classes, as well as the invariant distribution on each of them, is already known.


I also wonder whether, for the case of $\xi$ taking the values $-1,0,1$ each with probability $\frac13$ (which leads to an irreducible Markov chain with $(M+1)^N$ states), the invariant distribution is known.

  • @Henry: thanks for fixing.

1 Answer


For one walker in your second case, it is fairly clear that $p(x)=\frac{1}{M}$ for $0 \lt x \lt M$ and $p(0)=p(M)=\frac{1}{2M}$ is invariant, since you typically have

  • $p(x)=\frac{1}{3}p(x-1)+\frac{1}{3}p(x)+\frac{1}{3}p(x+1)$

but with special cases near the boundary, such as

  • $p(1)=\frac{2}{3}p(0)+\frac{1}{3}p(1)+\frac{1}{3}p(2)$ and
  • $p(0)=\frac{1}{3}p(0)+\frac{1}{3}p(1)$

and similarly at the other end. In a handwaving sense this distribution might also be thought to be invariant for your first case, though with the problems you point out.
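As a numerical sanity check (not part of the answer itself), one can write out the one-walker transition matrix for the $-1,0,1$ walk, using the convention that $\xi=0$ leaves a walker at the edge, and verify $pP=p$ directly:

```python
import numpy as np

M = 6   # any M >= 3 works the same way
P = np.zeros((M + 1, M + 1))
for x in range(M + 1):
    for xi in (-1, 0, 1):
        y = x + xi
        if y < 0:
            y = 1            # reflection at the lower edge
        elif y > M:
            y = M - 1        # reflection at the upper edge
        P[x, y] += 1 / 3

p = np.full(M + 1, 1 / M)    # interior points
p[0] = p[M] = 1 / (2 * M)    # edge points

print(np.allclose(p @ P, p))   # True: p is invariant
```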

So in general, for your $N$ walkers and $(M+1)^N$ states, the invariant distribution gives each state probability $\dfrac{1}{2^L M^N}$, where $L$ is the number of walkers at the extremes $0$ and $M$.
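Since the walkers are independent, this is just the product of the one-walker distributions, and it can be verified numerically for small $N$ and $M$ by building the joint transition matrix. The sketch below assumes the same edge convention as above ($\xi=0$ means staying put at $0$ or $M$):

```python
import numpy as np
from itertools import product

def joint_invariance_check(N, M):
    """Check that pi(s) = 1 / (2**L * M**N), with L = #walkers at 0 or M,
    is invariant for the joint -1,0,1 walk with reflection."""
    states = list(product(range(M + 1), repeat=N))
    index = {s: i for i, s in enumerate(states)}
    P = np.zeros((len(states), len(states)))
    for s in states:
        for xis in product((-1, 0, 1), repeat=N):
            t = []
            for x, xi in zip(s, xis):
                y = x + xi
                y = 1 if y < 0 else (M - 1 if y > M else y)
                t.append(y)
            P[index[s], index[tuple(t)]] += (1 / 3) ** N
    pi = np.array([1 / (2 ** sum(x in (0, M) for x in s) * M ** N)
                   for s in states])
    return np.allclose(pi @ P, pi), np.isclose(pi.sum(), 1.0)

print(joint_invariance_check(N=2, M=3))   # (True, True)
```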

  • The general $\frac{1}{2^L M^N}$ distribution is invariant both for the $-1,1$ case and for the $-1,0,1$ case (in the latter with a $\frac{1}{3}$ probability of staying at the edge). One way to see why this makes sense is to imagine the walkers on an $M$-point circle, identifying positions $0$ and $M$ as a single point.
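The comment's claim for the $-1,1$ case can be checked the same way; in the sketch below (again only an illustration) $\pi P=\pi$ still holds numerically, even though that chain is reducible:

```python
import numpy as np
from itertools import product

def invariance_pm1(N, M):
    """Same check as above, but xi = -1 or +1 with probability 1/2 each."""
    states = list(product(range(M + 1), repeat=N))
    index = {s: i for i, s in enumerate(states)}
    P = np.zeros((len(states), len(states)))
    for s in states:
        for xis in product((-1, 1), repeat=N):
            t = tuple(1 if x + xi < 0 else (M - 1 if x + xi > M else x + xi)
                      for x, xi in zip(s, xis))
            P[index[s], index[t]] += (1 / 2) ** N
    pi = np.array([1 / (2 ** sum(x in (0, M) for x in s) * M ** N)
                   for s in states])
    return np.allclose(pi @ P, pi)

print(invariance_pm1(N=2, M=3))   # True, despite the chain being reducible
```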