62

How does one prove that a simple (steps of length $1$ in directions parallel to the axes) symmetric (each possible direction is equally likely) random walk in $1$ or $2$ dimensions returns to the origin with probability $1$?

Edit: note that while returning to the origin is guaranteed $(p = 1)$ in $1$ and $2$ dimensions, it is not guaranteed in higher dimensions; this means that something in a correct justification for the $1$- or $2$-d case must fail to extend to $3$-d (or fail when the probability for each direction drops from $\frac14$ to $\frac16$).

  • 0
    A Modern Course in Statistical Physics 2nd Edition by L. E. REICHL equation 4.101 has the final result. The proof is there.2015-07-29

9 Answers

43

See Durrett, Probability: Theory and Examples (link goes to online copy of the fourth edition). On p. 164 Durrett gives a proof that simple random walk is recurrent in two dimensions.

First find the probability that simple random walk in one dimension is at $0$ after $2n$ steps; this is clearly $\rho_1(2n) = \binom{2n}{n}/2^{2n}$, since $\binom{2n}{n}$ is the number of paths with $n$ right steps and $n$ left steps.

Next, the probability that simple random walk in two dimensions -- call this $\rho_2(2n)$ -- is at $0$ after $2n$ steps is the square of the previous probability. Consider the simple random walk which makes steps to the northeast, northwest, southeast, and southwest with equal probability. The projections of this walk onto the x- and y-axes are independent simple random walks in one dimension. Rotating and rescaling gives the "usual" SRW in two dimensions (with steps north, east, south and west) and doesn't change the probability of being at $0$.

So $\rho_2(2n) = \left(\binom{2n}{n}/2^{2n}\right)^2$. This is asymptotic to $1/(\pi n)$, so the expected number of returns to the origin, $\sum_{n \geq 1} \rho_2(2n)$, is infinite. It's not hard to show (the proof is unenlightening, so I'll leave it to Durrett) that an infinite expected number of returns implies the probability of eventually returning to the origin is $1$.
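
As a numerical sanity check (a short Python sketch, not part of Durrett's argument), one can compute $\rho_2(2n)$ exactly and compare it with $1/(\pi n)$:

```python
from math import comb, pi

def rho2(n):
    """Probability that 2D simple random walk is back at the origin after 2n steps."""
    return (comb(2 * n, n) / 4 ** n) ** 2

for n in (10, 100, 1000):
    print(n, rho2(n), 1 / (pi * n))  # the two values agree ever more closely
```

Since $\rho_2(2n) \sim 1/(\pi n)$ and the harmonic series diverges, the partial sums $\sum_{n \le N} \rho_2(2n)$ grow like $\frac{1}{\pi}\log N$.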

  • 1
    For $d>2$ the probability decays at least as fast as $n^{-3/2}$, so the expected number of returns is finite. One can then apply the Borel–Cantelli lemma to show that with probability $1$ the random walk does not return to the origin an infinite number of times.2014-04-03
13

I'll try it for 2D and then we can get 1D as a corollary [exercise!]...

This is the only proof I know of; there may be a more intuitive proof out there, but I like this one: it uses generating functions in a really nifty way.

Consider the probability of being at the origin after $2n$ steps (notice we cannot return in an odd number of steps):

$u_0=1$

$u_{2n} = p(n,n,0,0)+ p(n-1,n-1,1,1)+...+p(n-k,n-k,k,k)+...+p(0,0,n,n)$ (when $n\neq0$)

Here $p(u,d,l,r)$ is the probability that the first $2n$ steps consist of $u$ up, $d$ down, $l$ left and $r$ right steps, in any order. Each order has probability $\frac{1}{4^{2n}}$, and there are $\frac{(2n)!}{u!d!l!r!}$ distinct orders, giving $p(n-k,n-k,k,k)=\frac{1}{4^{2n}} \frac{(2n)!}{(n-k)!(n-k)!k!k!}$

Now, since $(2n)!=n! n! \binom{2n}{n}$ we have $p(n-k,n-k,k,k)=\frac{1}{4^{2n}} \binom{2n}{n} \binom{n}{k}^2$ giving

$u_{2n}= \frac{1}{4^{2n}} \sum_{k=0}^{n} \binom{2n}{n} \binom{n}{k}^2$ which, by the identity $\sum_{k=0}^{n} \binom{n}{k}^2 = \binom{2n}{n}$ (a form of Vandermonde's identity), can be contracted to

$u_{2n}= \frac{1}{4^{2n}} \binom{2n}{n}^2$
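
The contraction can be double-checked numerically; here is a minimal Python sketch (illustration only) comparing the raw multinomial sum with the closed form:

```python
from math import comb, factorial

def u_direct(n):
    """u_{2n} as the sum of p(n-k, n-k, k, k) over k = 0..n."""
    return sum(
        factorial(2 * n) // (factorial(n - k) ** 2 * factorial(k) ** 2)
        for k in range(n + 1)
    ) / 4 ** (2 * n)

def u_closed(n):
    """The contracted form u_{2n} = C(2n, n)^2 / 4^(2n)."""
    return comb(2 * n, n) ** 2 / 4 ** (2 * n)

for n in range(1, 10):
    assert abs(u_direct(n) - u_closed(n)) < 1e-12
```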

Let us put that in our back pocket for now and consider instead $f_{2n}$, the probability of first returning to the origin after $2n$ steps. This is rather difficult to tackle directly, but we can make ourselves a cunning little formula involving it, jazz out some generating-function fun, and seal the deal with an asymptotic estimate.

The formula in question is: $u_{2n}= f_2 u_{2n-2}+ f_4 u_{2n-4} +....+f_{2n-2}u_{2} + f_{2n}$

Which we shall not prove so much as explain: to return to the origin after $2n$ steps (LHS), you must either first return after $2$ steps and then do a 'return to the origin in $2n-2$ steps' walk (first term of the RHS), or first return after $4$ steps and then do a 'return to the origin in $2n-4$ steps' walk (second term), or ... or first return after $2n-2$ steps and do a 'return to the origin in $2$ steps' walk, or first return to the origin after exactly $2n$ steps (last term).

We shall now tweak our formula just a tiny bit so it has the right symmetry properties for what is to come; we do this by adding the terms $f_0=0$ and $u_0=1$ to give

$u_{2n}= f_0 u_{2n}+ f_2 u_{2n-2}+ f_4 u_{2n-4} +....+f_{2n-2}u_{2} + f_{2n}u_0$

Which is secretly a statement about generating functions (this is the sweet bit!): look at the generating functions of $u_n$ and $f_n$:

$U(x)= \Sigma_m u_{2m} x^{2m}$, $F(x)= \Sigma_m f_{2m} x^{2m}$

We see:

$U(x)= 1+ U(x)F(x)$

(where the '$1+$' compensates for the fact that $u_0$ does not appear in the product). Rearranging:

$F(x)= \frac{U(x)-1}{U(x)}$

Observe that the probability of ever returning is $\sum_n f_{2n} = F(1)$, which comes out as $1$ because $U(1)$ diverges: by Stirling's formula, $u_{2n} = \binom{2n}{n}^2/4^{2n} \sim \frac{1}{\pi n}$, and the harmonic series diverges.
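
One can watch this happen numerically. The sketch below (an illustration, not part of the proof) recovers $f_{2n}$ from the recursion $f_{2n} = u_{2n} - \sum_{m=1}^{n-1} f_{2m}\,u_{2n-2m}$ and sums it; the partial sums do creep toward $1$, though only logarithmically slowly, which is characteristic of two dimensions:

```python
from math import comb

N = 1500  # number of u_{2n} terms to use
u = [comb(2 * n, n) ** 2 / 16 ** n for n in range(N + 1)]  # u[n] = u_{2n}
f = [0.0] * (N + 1)  # f[n] = f_{2n}, first-return probabilities
for n in range(1, N + 1):
    f[n] = u[n] - sum(f[m] * u[n - m] for m in range(1, n))

print(sum(f))  # partial sum of the f_{2n}; tends to 1 as N grows
```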

Edit: Until Tex comes online, this is pretty unreadable, so here's a link to some lecture notes I found with the same proof (and, fortunately, the same notation!). Enjoy!

Edit$^2$: Hooray, Tex has come online!!! Enjoy.

  • 0
    @analystic $\binom{n}{k}$ is always equal to $\frac{n!}{k!(n-k)!}$.2015-11-30
8

Let $X_n = 1$ if at the $n^{\text{th}}$ step we are back at the origin, and $X_n = 0$ otherwise.

Then $\mathbb{E}(X_n) = p_n$, the probability of being at the origin after $n$ steps.

Let $X = \displaystyle \sum_{n=1}^{\infty} X_n$ count the number of returns. By linearity of expectation, the expected number of returns is

$\mathbb{E}(X) = \displaystyle \sum_{n=1}^{\infty} \mathbb{E}(X_n) = \displaystyle \sum_{n=1}^{\infty} p_n$

Note that $p_n = 0$ if $n$ is odd.

Let $\rho$ be the probability that the random walk returns (at-least once) to $0$

$\rho_k$ be the probability that we return exactly $k$ times to the origin

$\rho_0 = 1 -\rho$ and, in general, $\rho_k = \rho^k (1- \rho)$, since after each return the walk starts afresh (strong Markov property).

$\mathbb{E}(X) = \displaystyle \sum_{k=0}^{\infty} k \rho^k (1-\rho) = \frac{\rho}{1-\rho}$ if $\rho < 1$

Hence, from the above we see that,

If $\rho < 1$, then $\mathbb{E}$(Number of returns) is finite i.e. $\displaystyle \sum_{n=1}^{\infty} p_n < \infty$

If $\rho = 1$, then $\mathbb{E}$(Number of returns) is infinite i.e. $\displaystyle \sum_{n=1}^{\infty} p_n = \infty$

In one dimension, we have $p_{2n+1} = 0$ and $p_{2n} = \frac1{2^{2n}} \binom{2n}{n}$. The middle binomial coefficient is the largest of the $2n+1$ coefficients summing to $2^{2n}$, hence $\frac1{2^{2n}} \binom{2n}{n} \geq \frac1{2n+1}$, and hence $\displaystyle \sum_{n=1}^{\infty} p_{2n}$ diverges by comparison with the harmonic series.

Or we could use Stirling's formula to get the sharper estimate $p_{2n} \approx \frac{c}{\sqrt{n}}$ (with $c = \frac1{\sqrt{\pi}}$), and again the sum diverges.
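
Both the crude bound and the Stirling estimate (with constant $c = 1/\sqrt{\pi}$) are easy to check numerically; a small Python sketch, purely illustrative:

```python
from math import comb, pi, sqrt

def p1(n):
    """Probability that a 1D simple random walk is at 0 after 2n steps."""
    return comb(2 * n, n) / 4 ** n

for n in (10, 100, 1000):
    assert p1(n) >= 1 / (2 * n + 1)    # the crude middle-coefficient bound
    print(n, p1(n), 1 / sqrt(pi * n))  # vs. the Stirling estimate
```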

In two dimensions, we have $p_{2n} = \frac1{4^{2n}} \displaystyle \sum_{k=0}^{n} \binom{2n}{k} \binom{2n-k}{k} \binom{2n-2k}{n-k} = \frac1{4^{2n}} \displaystyle \sum_{k=0}^{n} \frac{(2n)!}{k! k! (n-k)! (n-k)!}$

The maximum term is at $k = \frac{n}{2}$, and, as in the Central Limit Theorem, the terms with $k$ in an interval of length $\sim \sqrt{n}$ around $\frac{n}{2}$ are all of comparable, maximal size.

By Stirling's formula, each such $k$ contributes a term of order $\frac1{n^{3/2}}$.

Since there are of the order of $\sqrt{n}$ such $k$, we get $p_{2n} \sim \frac1{n}$, and so $\displaystyle \sum_n p_n = \infty$ since the harmonic series diverges.

In general, $p_{2n}$ is of size about $\frac{c}{n^{d/2}}$, so $\displaystyle \sum_{n=1}^{\infty} p_n < \infty$ if $d > 2$, by the convergence and divergence of the $p$-series. And $\displaystyle \sum \mathbb{E}(X_n) < \infty \iff \mathbb{E}(X) < \infty \iff \rho < 1 \implies$ Probability(return infinitely often) $= 0$.

2

I'll do 1D.

1D walks build binary strings: 010101, and so on.

Say we take six steps. Then 111111 is just as likely as 101010.

However, how many of the possible sequences have six ones? Just one. How many of the possible sequences have three ones and three zeros? Many more.

That number is called the multiplicity, and it grows mighty fast. In the limit, its logarithm (per step) becomes the Shannon entropy.

Sequences are equally likely, but combinations are not.

In the limit the combinations with maximum entropy are going to dominate all the rest. So the walk is going to have gone an equal number of right and left steps... almost surely.
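
The six-step counting above can be tabulated directly; a tiny Python illustration:

```python
from math import comb

# number of 6-step sequences with exactly k ones (right-steps)
counts = [comb(6, k) for k in range(7)]
print(counts)                   # [1, 6, 15, 20, 15, 6, 1]
assert sum(counts) == 2 ** 6    # every one of the 64 sequences counted once
```

The balanced combination ($k=3$, three ones and three zeros) dominates, which is the multiplicity heuristic at work.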

  • 0
    https://en.wikipedia.org/wiki/Random_walk#Higher_dimensions which links to http://mathworld.wolfram.com/PolyasRandomWalkConstants.html 2010-07-23
2

I can prove the $1$-dimensional case a bit more formally than Jonathan. First we only look at the absolute value. Let us try to calculate the probability of it never exceeding $x$. The probability of $2x+1$ consecutive moves all being in the same direction is some $p>0$. If this ever occurs, then the absolute value will exceed $x$. Considering $n$ disjoint groups of $2x+1$ moves, the probability that at least one of these groups is all in the same direction is $1-(1-p)^n$, which approaches $1$. So the probability of reaching each absolute value $x$ (other than $0$) is $1$.

Now, let's consider the probability of reaching $0$ again. Without loss of generality, suppose our first move is $+1$. We have a $100\%$ chance of reaching a point at distance $1$ from this. There is a $50\%$ chance that the first such point is $0$ and $50\%$ that it is $2$. From $2$, we have a $100\%$ chance of reaching a point at distance $2$ from this: a $50\%$ chance that it is $0$, $50\%$ that it is $4$. Repeating, it is easy to see that we must reach $0$ again.

Furthermore, after we have reached an absolute value x, the probability of reaching it again is 1. So we reach each absolute value an infinite number of times. Due to symmetry, we can expect to reach each x an infinite number of times.

EDIT: I originally tried to apply it to the 2d case as you can see below. This is incorrect, as even though there will be no maximal absolute value, there is no reason we will necessarily come back down.

Similar reasoning will work in the 2d case. Instead of using symmetry, we simply note that each point with the same Manhattan distance from 0 has an expected value that is a non-zero proportion of the expected value all points a particular Manhattan distance away.

  • 0
    Okay, I think I see it; I think I'm just being exceptionally dense at the moment and need to crunch through it on paper... in the morning.2010-07-23
1

If you need only the 1D case, you can get this statement from the law of the iterated logarithm.

This does not give much combinatorial insight, and, well... it requires said law, which is a little more work to prove.

1

$P =$ probability that, if you go to $1$, you will return to $0$.

$Q =$ probability that, if you go to $2$, you will return to $0$.

Assuming you start at $0$, you will either go to $1$ or to $-1$; by symmetry, the return probability from $-1$ is the same as from $1$, so it suffices to find $P$.

From $1$, there are $2$ possibilities:

1) you can go immediately back to $0$

2) you can go to $2$.

So, $P = \frac{1}{2} + \frac{1}{2} Q$

What is $Q$?

From $2$, you have $2$ possibilities:

1) You can go even further out.

2) You can return to $0$. What is the probability that you will return to $0$?

Well, the probability that you will return to $1$ is $P$, because returning to $1$ from $2$ has the same probability as returning to $0$ from $1$. And you must return to $1$ before you can return to $0$; from $1$, the probability of then reaching $0$ is again $P$. Hence $Q = P \cdot P = P^2$.

Therefore:
\begin{align} P & = \frac{1}{2} + \frac{1}{2} P^2\\ 2P & = 1+P^2\\ P^2 - 2P + 1 & = 0\\ (P - 1)^2 & = 0\\ P & = 1.\end{align}
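
For readers who want to see the conclusion empirically, here is a Monte Carlo sketch (illustrative only; `returns_to_zero` and the step caps are my own choices, and each capped estimate is a lower bound on the true return probability):

```python
import random

def returns_to_zero(max_steps, rng):
    """True if a 1D simple random walk returns to 0 within max_steps steps."""
    pos = 0
    for _ in range(max_steps):
        pos += rng.choice((-1, 1))
        if pos == 0:
            return True
    return False

rng = random.Random(0)
trials = 2000
for max_steps in (10, 100, 1000):
    hits = sum(returns_to_zero(max_steps, rng) for _ in range(trials))
    print(max_steps, hits / trials)  # estimates increase toward 1
```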

  • 0
    P = 1/2 + (1/2)P^2 ---- 2P = 1 + P^2 ---- P^2 - 2P + 1 = 0 ---- (P-1)^2 = 0 ---- P = 12011-03-14
1

Since my reputation is less than 50 and my suggested edit has been rejected, I am adding an explanation of why $ U(x) = 1 + U(x)F(x) $. This answer assumes background from Tom's answer.

$ U(x) = \sum_{m=0}^{\infty} u_{2m} x^{2m} $, $ F(x) = \sum_{k=1}^{\infty} f_{2k} x^{2k} $

It is trivial that $ u_0 = 1 $.

$ U(x)F(x) = \left(\sum_{m=0}^{\infty} u_{2m} x^{2m}\right) \left(\sum_{k=1}^{\infty} f_{2k} x^{2k}\right) $

$ U(x)F(x) = \sum_{n=1}^{\infty} \left(\sum_{k=1}^{n} f_{2k} u_{2n-2k}\right) x^{2n} $

$ U(x)F(x) = \sum_{n=1}^{\infty} u_{2n} x^{2n} $ (the inner sum is exactly the first-return decomposition of $u_{2n}$ from Tom's answer)

$ U(x)F(x) = U(x) - 1 $

NOTE: The proof of $ U(x) = 1 + U(x)F(x) $ is cited from here (go to page 3).

0

For the one dimensional case

Assume there is a non-zero probability $p$ that $0$, the starting point, is the leftmost point of the walk. Then there is a $1-p$ chance the walk lands on $-1$ at some point. Given that it lands on $-1$, there is (by translation invariance) a $p$ chance that $-1$ turns out to be the leftmost point. Continuing in this way, we end up with a geometric distribution for the leftmost point of the walk over $0, -1, -2, \dots$, which sums to $1$.

So if there is a nonzero probability that $0$ is the leftmost point, then with probability $1$ a leftmost point exists. The same goes for the rightmost point. This implies that with probability $1$ the walk is bounded on the left and on the right.

But the probability that the walk is bounded on the left and the right must be zero: trimming down the boundaries, a bounded walk would have an upper and a lower limit which it lands on infinitely many times, bouncing back in the same direction every time, and that has probability $0$.

So it is impossible to have a nonzero probability that the origin is the leftmost point. From there it follows easily that the walk hits every point infinitely many times with probability $1$.

For the $2$-dimensional case, just consider the "shells" of points at walking distance $n$ from the origin. Every step moves to an adjacent shell, either inward or outward, with bounded probabilities.