
If a function $f : \mathbb Z\times \mathbb Z \rightarrow \mathbb{R}^{+}$ satisfies the following condition

$\forall (x, y) \in \mathbb{Z}\times\mathbb{Z}, \quad f(x,y) = \dfrac{f(x + 1, y)+f(x, y + 1) + f(x - 1, y) +f(x, y - 1)}{4},$

then is $f$ a constant function?
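A quick sanity check of the averaging condition (a sketch added here, not from the original post): a linear function and the saddle $x^2-y^2$ both satisfy it exactly, though each takes negative values somewhere, so neither contradicts the positivity hypothesis.

```python
# Verify the discrete mean-value property f(x,y) = average of the four
# neighbours for two classical discrete-harmonic functions.  Both take
# negative values somewhere, so they are consistent with the question:
# positivity is what should force constancy.

def is_harmonic_at(f, x, y):
    """Check the averaging condition at the single point (x, y)."""
    avg = (f(x + 1, y) + f(x, y + 1) + f(x - 1, y) + f(x, y - 1)) / 4
    return abs(f(x, y) - avg) < 1e-9

linear = lambda x, y: 3 * x - 2 * y + 7      # harmonic, unbounded below
saddle = lambda x, y: x ** 2 - y ** 2        # harmonic, unbounded below

assert all(is_harmonic_at(linear, x, y) and is_harmonic_at(saddle, x, y)
           for x in range(-5, 6) for y in range(-5, 6))
print("averaging condition holds on the sample grid")
```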

  • 1
    @girdav You are right. The lower bound is probably enough.2011-07-17

3 Answers

37

You can prove this with probability.

Let $(X_n)$ be the simple symmetric random walk on $\mathbb{Z}^2$. Since $f$ is harmonic, the process $M_n:=f(X_n)$ is a martingale. Because $f\geq 0$, the process $M_n$ is a non-negative martingale and so must converge almost surely by the Martingale Convergence Theorem. That is, we have $M_n\to M_\infty$ almost surely.

But $(X_n)$ is irreducible and recurrent, and so visits every state infinitely often. Thus, with probability one, the sequence $f(X_n)$ takes every value of $f$ infinitely often.

Thus $f$ is a constant function, since the sequence $M_n=f(X_n)$ cannot take two distinct values infinitely often and still converge.
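The martingale step can be illustrated numerically. A small sketch (my addition, using the signed discrete-harmonic function $f(x,y)=x^2-y^2$, which of course does not satisfy the positivity hypothesis): evolving the exact distribution of the walk shows $\mathbb{E}[f(X_n)]$ staying at $f(X_0)$.

```python
# Sanity check of the martingale property E[f(X_n)] = f(X_0): evolve the
# exact distribution of a simple symmetric random walk on Z^2 started at
# (3, 1) and compute E[f(X_n)] for the discrete-harmonic test function
# f(x, y) = x^2 - y^2.  (f is signed, so it does not contradict the
# Liouville statement, which needs f >= 0.)
from collections import defaultdict

f = lambda x, y: x ** 2 - y ** 2
dist = {(3, 1): 1.0}                    # X_0 = (3, 1), so f(X_0) = 8

for n in range(20):
    new = defaultdict(float)
    for (x, y), p in dist.items():
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            new[(x + dx, y + dy)] += p / 4
    dist = new
    mean = sum(p * f(x, y) for (x, y), p in dist.items())
    assert abs(mean - 8) < 1e-9         # martingale: expectation is constant
print("E[f(X_n)] = 8 for n = 1..20")
```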

  • 3
    If you know that the real part of an entire function $f(z)$ is non-negative on the complex plane, what can you say about the function $g(z)=f(z)/(1+f(z))$?2011-07-18
12

I can give a proof for the $d$-dimensional case: if $f\colon\mathbb{Z}^d\to\mathbb{R}^+$ is harmonic then it is constant. The following is based on a quick proof that I mentioned in the comments to the same (closed) question on MathOverflow, "Liouville property in $\mathbb{Z}^d$". [Edit: I updated the proof, using a random walk, to simplify it.]

First, as $f(x)$ is equal to the average of the values of $f$ over the $2d$ nearest neighbours of $x$, we have the inequality $f(x)\ge(2d)^{-1}f(y)$ whenever $x,y$ are nearest neighbours. If $\Vert x\Vert_1$ is the length of the shortest path from $x$ to $0$ (the taxicab metric, or $L^1$ norm), this gives $f(x)\le(2d)^{\Vert x\Vert_1}f(0)$. Now let $X_n$ be a simple symmetric random walk in $\mathbb{Z}^d$ starting from the origin and, independently, let $T$ be a random variable supported on the whole of the nonnegative integers such that $\mathbb{E}[(2d)^{2T}] < \infty$. Then, $X_T$ has support $\mathbb{Z}^d$ and, for nonnegative harmonic $f$, we get $\mathbb{E}[f(X_T)]=f(0)$, while $\Vert X_T\Vert_1\le T$ gives $\mathbb{E}[f(X_T)^2]\le\mathbb{E}[(2d)^{2T}]f(0)^2$. By compactness, we can choose $f$ with $f(0)=1$ to maximize $\Vert f\Vert_2\equiv\mathbb{E}[f(X_T)^2]^{1/2}$.

Writing $e_i$ for the unit vector in direction $i$, set $f_i^\pm(x)=f(x\pm e_i)/f(\pm e_i)$. Then, $f$ is equal to a convex combination of $f^+_i$ and $f^-_i$ over $i=1,\ldots,d$. Also, by construction, $\Vert f\Vert_2\ge\Vert f^\pm_i\Vert_2$. Comparing with the triangle inequality, we must have equality here, and $f$ is proportional to each $f^\pm_i$. This means that there are constants $K_i > 0$ such that $f(x+e_i)=K_if(x)$. The average of $f$ on the $2d$ nearest neighbours of the origin is $ \frac{1}{2d}\sum_{i=1}^d(K_i+1/K_i). $ However, for positive $K$, $K+K^{-1}\ge2$ with equality iff $K=1$. So, $K_i=1$ and $f$ is constant.

Now, if $g$ is a positive harmonic function, then $\tilde g(x)\equiv g(x)/g(0)$ satisfies $\mathbb{E}[\tilde g(X_T)]=1$. So, $ {\rm Var}(\tilde g(X_T))=\mathbb{E}[\tilde g(X_T)^2]-1\le\mathbb{E}[f(X_T)^2]-1=0, $ and $\tilde g$ is constant.
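The final step, that the neighbour average $\frac{1}{2d}\sum_i(K_i+1/K_i)$ equals $1$ only when every $K_i=1$, is easy to confirm numerically. A minimal sketch (my addition, with $d=3$ and randomly sampled $K_i$; not part of the original argument):

```python
# Check (illustrative, not part of the proof): for positive K_1, ..., K_d
# the neighbour average (1/(2d)) * sum_i (K_i + 1/K_i) is at least 1,
# with equality exactly when every K_i = 1, since K + 1/K >= 2 by AM-GM.
import random

random.seed(0)
d = 3
for _ in range(1000):
    K = [random.uniform(0.1, 10.0) for _ in range(d)]
    avg = sum(k + 1 / k for k in K) / (2 * d)
    assert avg >= 1 - 1e-9              # AM-GM term by term

# the equality case: all K_i equal to 1 gives average exactly 1
assert sum(k + 1 / k for k in [1.0] * d) / (2 * d) == 1.0
print("neighbour average >= 1, with equality at K_i = 1")
```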

  • 2
Note: A similar proof will also show that harmonic $f\colon\mathbb{R}^d\to\mathbb{R}^+$ is constant. Interestingly, in the two dimensional case, Byron's proof can be modified to show that harmonic $f\colon\mathbb{R}^2\setminus\{0\}\to\mathbb{R}^+$ is constant (as 2d Brownian motion has zero probability of hitting 0 at positive times). Neither of the proofs generalizes to harmonic $f\colon\mathbb{R}^d\setminus\{0\}\to\mathbb{R}^+$ for $d\not=2$. In fact, considering $f(x)=\Vert x\Vert^{2-d}$, we see that $f$ need not be constant for $d\not=2$.2011-07-17
7

Here is an elementary proof assuming we have bounds for $f$ on both sides.

Define a random walk on $\mathbb{Z}^2$ which, at each step, stays put with probability $1/2$ and moves to each of the four neighboring vertices with probability $1/8$. Let $p_k(u,v)$ be the probability that the walk travels from $(m,n)$ to $(m+u, n+v)$ in $k$ steps. Then, for any $(m, n)$ and $k$, we have $f(m, n) = \sum_{(u,v) \in \mathbb{Z}^2} p_k(u,v) f(m+u,n+v).$ So $f(m+1, n) - f(m, n) = \sum_{(u,v) \in \mathbb{Z}^2} \left( p_k(u-1,v) - p_k(u,v) \right) f(m+u,n+v).$ If we can show that $\lim_{k \to \infty} \sum_{(u,v) \in \mathbb{Z}^2} \left| p_k(u-1,v) - p_k(u,v) \right| =0 \quad (\ast)$ we deduce that $f(m+1,n) = f(m,n)$ and we win.

Remark: More generally, we could stay put with probability $p$ and travel to each neighbor with probability $(1-p)/4$. If we choose $p$ too small, then $p_k(u,v)$ tends to be larger for $u+v$ even than for $u+v$ odd, rather than depending "smoothly" on $(u,v)$. I believe that $(\ast)$ is true for any $p>0$, but this elementary proof only works for $p > 1/3$. For concreteness, we'll stick to $p=1/2$.
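For what it's worth, $(\ast)$ is easy to probe numerically. A sketch (added here, not in the original answer) that builds $p_k$ for the $p=1/2$ walk by repeated convolution and watches the shifted-difference sum shrink:

```python
# Numerical probe of (*) for the p = 1/2 walk: build p_k by repeated
# convolution and watch sum_{u,v} |p_k(u-1, v) - p_k(u, v)| shrink.
from collections import defaultdict

def step(dist):
    """One step of the lazy walk: stay w.p. 1/2, each neighbour w.p. 1/8."""
    new = defaultdict(float)
    for (u, v), p in dist.items():
        new[(u, v)] += p / 2
        for du, dv in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            new[(u + du, v + dv)] += p / 8
    return new

dist = {(0, 0): 1.0}
totals = []
for k in range(1, 41):
    dist = step(dist)
    support = set(dist) | {(u + 1, v) for (u, v) in dist}
    totals.append(sum(abs(dist.get((u - 1, v), 0.0) - dist.get((u, v), 0.0))
                      for (u, v) in support))

assert all(a >= b - 1e-12 for a, b in zip(totals, totals[1:]))  # nonincreasing
assert totals[-1] < totals[0] / 3       # visibly shrinking by k = 40
print(f"sum at k=1: {totals[0]:.4f}, at k=40: {totals[-1]:.4f}")
```

(The sum is nonincreasing in $k$ because convolving a difference with a probability kernel cannot increase its $L^1$ norm.)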

We study $p_k(u,v)$ using the generating function expression $\left( \frac{x+x^{-1}+y+y^{-1}+4}{8} \right)^k = \sum_{u,v} p_k(u,v) x^u y^v.$

Lemma: For fixed $v$, the quantity $p_k(u,v)$ increases as $u$ climbs from $-\infty$ up to $0$, and then decreases as $u$ continues climbing from $0$ to $\infty$.

Proof: We see that $\sum_u p_k(u,v) x^u$ is a positive sum of Laurent polynomials of the form $(x/8+1/2+x^{-1}/8)^j$. So it suffices to prove the same thing for the coefficients of this Laurent polynomial. In other words, writing $(x^2+4x+1)^j = \sum e_i x^i$, we want to prove that $e_i$ is unimodal with largest value in the center. Now, $e_i$ is the $i$-th elementary symmetric function in $j$ copies of $2+\sqrt{3}$ and $j$ copies of $2-\sqrt{3}$. By Newton's inequalities, $e_i^2 \geq \frac{(i+1)(2j-i+1)}{i(2j-i)} e_{i-1} e_{i+1} > e_{i-1} e_{i+1}$, so $e_i$ is unimodal; by symmetry, the largest value is in the center. (The condition $p>1/3$ in the above remark is exactly when the quadratic has real roots.) $\square$
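The lemma's claim can be tested directly on the coefficients of $(x/8+1/2+x^{-1}/8)^j$, computed exactly. A sketch (my addition) using repeated convolution with rational arithmetic:

```python
# Check of the lemma (illustrative): the coefficients of the Laurent
# polynomial (x/8 + 1/2 + x^{-1}/8)^j are symmetric and strictly
# unimodal, with the largest coefficient in the centre (at x^0).
from fractions import Fraction

KERNEL = (Fraction(1, 8), Fraction(1, 2), Fraction(1, 8))

def coeffs(j):
    """Coefficients of (x/8 + 1/2 + x^{-1}/8)^j, lowest power (x^{-j}) first."""
    c = [Fraction(1)]
    for _ in range(j):
        c = [sum(c[i - s] * w for s, w in enumerate(KERNEL)
                 if 0 <= i - s < len(c))
             for i in range(len(c) + 2)]
    return c

for j in range(1, 15):
    c = coeffs(j)
    assert c == c[::-1]                              # symmetric about x^0
    mid = len(c) // 2
    assert all(c[i] < c[i + 1] for i in range(mid))  # strictly rising to centre
print("coefficients symmetric and unimodal for j = 1..14")
```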

Corollary: $\sum_u \left| p_k(u-1,v) - p_k(u,v) \right| = 2 p_k(0,v).$

Proof: The above lemma tells us the signs of all the absolute values; the sum is \begin{multline*} \cdots + (p_k(-1,v) - p_{k}(-2,v)) + (p_k(0,v) - p_{k}(-1,v)) + \\ (p_k(0,v) - p_k(1,v)) + (p_k(1,v) - p_k(2,v)) + \cdots = 2 p_k(0,v). \qquad \square\end{multline*}
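The telescoping identity in the corollary is also easy to confirm numerically. A sketch (added here) that builds $p_k$ by convolution and checks the identity for every $v$:

```python
# Check the corollary numerically: for each fixed v, the telescoping sum
# sum_u |p_k(u-1, v) - p_k(u, v)| collapses to 2 * p_k(0, v).
from collections import defaultdict

def walk_dist(k):
    """Distribution p_k of the lazy walk (stay 1/2, each neighbour 1/8)."""
    dist = {(0, 0): 1.0}
    for _ in range(k):
        new = defaultdict(float)
        for (u, v), p in dist.items():
            new[(u, v)] += p / 2
            for du, dv in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                new[(u + du, v + dv)] += p / 8
        dist = new
    return dist

k = 12
p = walk_dist(k)
for v in range(-k, k + 1):
    lhs = sum(abs(p.get((u - 1, v), 0.0) - p.get((u, v), 0.0))
              for u in range(-k - 1, k + 2))
    assert abs(lhs - 2 * p.get((0, v), 0.0)) < 1e-12
print("sum_u |p_k(u-1,v) - p_k(u,v)| = 2 p_k(0,v) for all v at k = 12")
```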

So, in order to prove $(\ast)$, we must show that $\lim_{k \to \infty} \sum_v p_k(0,v)=0$. In other words, we must show that the coefficient of $x^0$ in $\left( \frac{x}{8}+\frac{3}{4} + \frac{x^{-1}}{8} \right)^k$ goes to $0$.

There are probably a zillion ways to do this; here is a probabilistic one. We are rolling an $8$-sided die $k$ times, and we want the probability that the numbers of ones and twos rolled are precisely equal. The probability that we roll fewer than $k/5$ ones and twos combined approaches $0$ by the law of large numbers (which can be proved elementarily by, for example, Chebyshev's inequality). If we roll $2r > k/5$ ones and twos, the probability that we roll exactly the same number of ones as twos is $2^{-2r} \binom{2r}{r} < \frac{1}{\sqrt{\pi r}} < \frac{1}{\sqrt{\pi k/10}},$ which approaches $0$ as $k \to \infty$. See here for elementary proofs of the bound on $\binom{2r}{r}$.
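Both ingredients of this last step can be checked numerically. A sketch (my addition) verifying the central binomial bound and watching the $x^0$ coefficient decay, with the multinomial sum computed in log space to avoid overflow for large $k$:

```python
# Two checks for the last step (illustrative): (i) the central binomial
# bound 2^{-2r} * C(2r, r) < 1 / sqrt(pi * r), and (ii) the coefficient
# of x^0 in (x/8 + 3/4 + x^{-1}/8)^k -- the probability of equally many
# ones and twos in k rolls of the 8-sided die -- really does go to 0.
import math

for r in range(1, 200):
    assert math.comb(2 * r, r) / 4 ** r < 1 / math.sqrt(math.pi * r)

def x0_coeff(k):
    """Sum over r of P(exactly r ones and r twos in k rolls), using
    log-space multinomial coefficients to avoid overflow for large k."""
    return sum(math.exp(math.lgamma(k + 1) - math.lgamma(k - 2 * r + 1)
                        - 2 * math.lgamma(r + 1)
                        - 2 * r * math.log(8) + (k - 2 * r) * math.log(0.75))
               for r in range(k // 2 + 1))

vals = [x0_coeff(k) for k in (1, 10, 100, 1000)]
assert all(a > b for a, b in zip(vals, vals[1:]))   # strictly decreasing
assert vals[-1] < 0.05                              # roughly 1/sqrt(pi k/2)
print([round(v, 4) for v in vals])
```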

I wrote this in two dimensions, but the same proof works in any number of dimensions.

  • 0
Hi, I know this is an old post (and an amazing solution). Do you maybe have a few words as to the motivation you had when solving it? I have 3 questions (sorry, the formatting won't let me enter them): 1. What was the motivation to approach it with a random walk? 2. Did you try small cases and conjecture the unimodality for a fixed $v$? Or is there a good heuristic? Thanks in advance, I love your answers :)2017-09-23