I am looking at a discrete-time random walk on $(\mathbb{Z}^{+})^n$, where $\mathbb{Z}^{+}:=\{0,1,2,\dots\}$ and $n\in\mathbb{N}$. At each time step, the walk chooses a uniformly random coordinate and then either steps up (by $1$) w.p. $p<\frac{1}{2}$ or tries to step down (by $1$) w.p. $1-p$; if the chosen coordinate is already $0$, the down-step does nothing. I would like to show that this random walk is exponentially ergodic, that is, $P(\text{not returned to $(0,\dots,0)$ by time }t)\leq e^{-\eta t}$ for some constant $\eta>0$. Any hints on how I might do this?
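For what it's worth, the dynamics are easy to simulate. The following minimal Python sketch (all function and variable names are mine, purely for illustration) tabulates the empirical tail $P(\text{not returned by }t)$, which already decays roughly geometrically:

```python
import random

def step(x, p, rng):
    """One step of the walk: pick a uniform coordinate, go up w.p. p,
    otherwise try to go down (a no-op at the boundary)."""
    i = rng.randrange(len(x))
    if rng.random() < p:
        x[i] += 1
    elif x[i] > 0:
        x[i] -= 1

def return_time(n, p, rng, cap=10_000):
    """First time t >= 1 at which the walk, started at the origin,
    is back at the origin (capped at `cap`)."""
    x = [0] * n
    for t in range(1, cap + 1):
        step(x, p, rng)
        if all(v == 0 for v in x):
            return t
    return cap

rng = random.Random(1)
times = [return_time(3, 0.2, rng) for _ in range(20_000)]
for t in (1, 2, 4, 8, 16):
    print(t, sum(s >= t for s in times) / len(times))
```

Note that $P(T\geq 2)=p$ exactly: the walk leaves the origin at the first step only if it steps up, which the simulation reproduces within Monte Carlo error.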
Exponential ergodicity of a biased random walk confined to the nonnegative integer orthant
-
Hi Didier, not quite yet. I'm attempting Mike's suggestion, and will post some thoughts after a good long try at this. – 2012-04-25
2 Answers
The $\mathbf{1}$-Dimensional Problem
According to the result here, the number of unilateral walks of type $k$ (strings of $k\;$ "$+1$"s and $k\;$ "$-1$"s whose partial sums are never negative) is $ \frac{1}{k+1}\binom{2k}{k}\tag{1} $ In the same article, it is shown that $ \frac{1-\sqrt{1-4x}}{2x}=\sum_{k=0}^\infty\frac{1}{k+1}\binom{2k}{k}x^k\tag{2} $ A first-return walk of length $2k+2$ consists of a "$+1$", followed by a walk of type $k$, and finished with a "$-1$", so it has probability $ \frac{1}{k+1}\binom{2k}{k}p^{k+1}(1-p)^{k+1}\tag{3} $ Summing $(3)$ over all $k\ge0$ via $(2)$ at $x=p(1-p)$ yields a total probability of $p$, which is exactly as expected since there is only a probability $p$ of an initial "$+1$".
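As a quick numerical sanity check of $(2)$ and $(3)$ (a sketch of my own, not from the article; `excursion_prob` is a name I made up), the series can be summed in floating point; computing consecutive terms by their ratio avoids evaluating huge binomial coefficients:

```python
# Check that summing (3) over all k gives p, as predicted by (2) at x = p(1-p).
def excursion_prob(p, terms=5000):
    """Sum of (1/(k+1)) C(2k,k) (p(1-p))^(k+1) over k < terms."""
    term = p * (1 - p)                         # k = 0 term of (3)
    total = 0.0
    for k in range(terms):
        total += term
        # ratio of consecutive terms of (3): 2(2k+1)/(k+2) * p(1-p)
        term *= 2 * (2 * k + 1) / (k + 2) * p * (1 - p)
    return total

print(excursion_prob(0.3))   # should be very close to 0.3
```

Since the term ratio tends to $4p(1-p)<1$, the truncation error after a few thousand terms is negligible for any fixed $p<\frac12$.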
To compute the probability of a walk of length at least $2m+2$, sum $(3)$ for $k\ge m$. For this, we will use Stirling's Approximation to get $ \begin{align} \binom{2k}{k} &=\frac{4^k}{\sqrt{\pi k}}\left(1-\frac{1}{8k}+O\left(\frac{1}{k^2}\right)\right)\\ &\le\frac{4^k}{\sqrt{\pi k}}\qquad\text{for }k\ge1\tag{4} \end{align} $ Plugging $(4)$ into $(3)$ and summing for $k\ge m$ yields that the probability of a walk of length at least $2m+2$ is $ \begin{align} \sum_{k=m}^\infty\frac{1}{k+1}\binom{2k}{k}p^{k+1}(1-p)^{k+1} &\le\sum_{k=m}^\infty\frac{1}{k+1}\frac{4^k}{\sqrt{\pi k}}p^{k+1}(1-p)^{k+1}\\ &\le\frac{p(1-p)}{m\sqrt{\pi m}}\sum_{k=m}^\infty4^kp^{k}(1-p)^{k}\\ &=\frac{p(1-p)m^{-3/2}}{\sqrt{\pi}(1-2p)^2}(4p(1-p))^m\tag{5} \end{align} $ where the last step sums the geometric series and uses $1-4p(1-p)=(1-2p)^2$. Bound $(5)$ has the desired form when $p<\frac12$.
Letting $t=2m+2$ in $(5)$, we get that the probability of a walk of length at least $t\ge4$ is no greater than $ \frac{1}{4\sqrt{\pi}(1-2p)^2}(4p(1-p))^{t/2}=\frac{1}{4\sqrt{\pi}(1-2p)^2}e^{-\eta\,t}\tag{6} $ where $\eta=-\frac12\log(4p(1-p))>0$ when $0<p<\frac12$. As has been pointed out, I need to rework the $n$-dimensional extension.
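One can also compare bound $(6)$ against the exact tail numerically (again a sketch of my own; `tail_exact` and `tail_bound` are names I made up, and the exact tail is truncated after many extra terms):

```python
from math import pi, sqrt

def tail_exact(p, m, extra=5000):
    """Sum of the terms (3) for k >= m, truncated after `extra` more terms."""
    term = p * (1 - p)                         # k = 0 term of (3)
    total = 0.0
    for k in range(m + extra):
        if k >= m:
            total += term
        # ratio of consecutive terms of (3): 2(2k+1)/(k+2) * p(1-p)
        term *= 2 * (2 * k + 1) / (k + 2) * p * (1 - p)
    return total

def tail_bound(p, t):
    """Right-hand side of (6) for t = 2m + 2 >= 4."""
    return (4 * p * (1 - p)) ** (t / 2) / (4 * sqrt(pi) * (1 - 2 * p) ** 2)

p = 0.3
for m in (1, 5, 20):
    print(2 * m + 2, tail_exact(p, m), tail_bound(p, 2 * m + 2))
```

The bound is loose for small $m$ (the $m^{-3/2}$ factor was dropped) but the geometric rate $(4p(1-p))^{t/2}$ matches the exact decay.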
-
On your random walk? :-) – 2012-04-25
Caveat: This is a partial answer, restricted to the regime $p\lt1/(n+1)$.
For every $x$ in $(\mathbb Z^+)^n$, let $|x|$ denote the sum of the coordinates of $x$ and $z(x)$ the proportion of coordinates $i$ such that $x_i=0$. Let $(X_t)_t$ denote the Markov chain in $(\mathbb Z^+)^n$, starting from $X_0\ne0$, and $T$ the first hitting time of $0$. For every $a\gt0$, $ \mathrm E(a^{|X_{t+1}|}\mid\mathcal F^X_t)=a^{|X_t|}\cdot n^{-1}\sum_{i=1}^n\left(pa+(1-p)a^{-1}[X_t^{(i)}\ne0]+(1-p)[X_t^{(i)}=0]\right), $ thus, $\mathrm E(a^{|X_{t+1}|}\mid\mathcal F^X_t)=a^{|X_t|}b(X_t)$, where $ b(x)=pa+(1-p)a^{-1}(1-z(x))+(1-p)z(x). $

Assume that $a\gt1$; then $a^{-1}\lt1$, hence $b(x)$ is increasing in $z(x)$, and $z(x)\leqslant1-n^{-1}$ for every $x\ne0$. Therefore, for every $x\ne0$, $b(x)\leqslant c(a)$ with $ c(a)=pa+(1-p)a^{-1}n^{-1}+(1-p)(1-n^{-1}). $ Hence $(M_t)_{t\leqslant T}$ is a positive supermartingale, where $M_t=a^{|X_t|}c(a)^{-t}$. In particular, $\mathrm E(M_T)\leqslant M_0$, that is, for every $x\ne0$, $ \mathrm E_x(c(a)^{-T})\leqslant a^{|x|}. $

If, for some $a\gt1$, $c(a)\lt1$, this yields the exponential control $ \mathrm P_x(T\geqslant t)\leqslant c(a)^t\mathrm E_x(c(a)^{-T})\leqslant c(a)^ta^{|x|}, $ hence we are done, since one can relate the distribution of $T$ starting from $X_0=0$ to the distribution of $T$ starting from any neighbour $x$ of $0$, namely, $ \mathrm P_0(T\geqslant t+1)=p\,\mathrm P_x(T\geqslant t)\leqslant p\,a\,c(a)^t. $

Finally, $c(1)=1$ and $c'(1)=p-(1-p)n^{-1}$, which is negative exactly when $p\lt1/(n+1)$; hence $c(a)\lt1$ for some $a\gt1$, and we are done for every $p\lt1/(n+1)$.
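To sanity-check this numerically (a sketch of my own, not part of the answer; all names are mine), one can take $a=\sqrt{(1-p)/(pn)}$, which minimizes $c(a)$ and exceeds $1$ exactly when $p\lt1/(n+1)$, and compare the resulting bound $p\,a\,c(a)^{t-1}$ on $\mathrm P_0(T\geqslant t)$ with Monte Carlo estimates:

```python
import random

# Monte Carlo check of the bound P_0(T >= t) <= p * a * c(a)^(t-1).
def return_time(n, p, rng, cap=10_000):
    """First t >= 1 at which the walk, started at the origin, is back at 0."""
    x = [0] * n
    for t in range(1, cap + 1):
        i = rng.randrange(n)
        if rng.random() < p:
            x[i] += 1          # step up w.p. p
        elif x[i] > 0:
            x[i] -= 1          # step down if possible
        if all(v == 0 for v in x):
            return t
    return cap

n, p = 2, 0.2                    # p < 1/(n+1) = 1/3, so the regime applies
a = ((1 - p) / (p * n)) ** 0.5   # minimizer of c(a); here a > 1
c = p * a + (1 - p) / (a * n) + (1 - p) * (1 - 1 / n)

rng = random.Random(0)
samples = [return_time(n, p, rng) for _ in range(20_000)]
for t in (5, 10, 20):
    emp = sum(s >= t for s in samples) / len(samples)
    print(t, round(emp, 4), round(p * a * c ** (t - 1), 4))
```

The empirical tails sit well below the bound, consistent with $c(a)\lt1$ in this regime.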