22

Let $z_{1},z_{2},\ldots,z_{n}$ be i.i.d. random points on the unit circle ($|z_i|=1$), with uniformly distributed angles. Consider the random polynomial $P(z)$ given by $$ P(z)=\prod_{i=1}^{n}(z-z_i). $$

Let $m$ be the maximum absolute value of $P(z)$ on the unit circle: $m=\max\{|P(z)|:|z|=1\}$.

How can I estimate $m$? More specifically, I would like to prove that there exists $\alpha>0$ such that the following holds almost surely as $n\to\infty$: $$ m\geq e^{\alpha\sqrt{n}}. $$

Any idea of what can be useful here?
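The simulations mentioned in the comments can be sketched as follows. This is a hedged reconstruction, not the original code: the helper name `estimate_log_max`, the seed, and the grid size are all my own arbitrary choices.

```python
import numpy as np

# Hedged reconstruction of the kind of simulation discussed in the comments
# (helper name, seed, and grid size are arbitrary, not from the original post).
rng = np.random.default_rng(0)

def estimate_log_max(n, grid=4096):
    """Sample n i.i.d. uniform angles and return log m, where
    m = max_{|z|=1} |P(z)| is estimated on a uniform grid of the circle."""
    roots = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, size=n))
    z = np.exp(1j * np.linspace(0.0, 2.0 * np.pi, grid, endpoint=False))
    # log|P(z)| = sum_i log|z - z_i|, computed in log space for stability
    log_P = np.sum(np.log(np.abs(z[:, None] - roots[None, :])), axis=1)
    return log_P.max()

for n in (100, 400, 1600):
    print(n, estimate_log_max(n) / np.sqrt(n))
```

If the conjecture holds, the printed ratios $\log m/\sqrt{n}$ should stay bounded away from $0$ as $n$ grows.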

  • 1
    You want a lower bound on the maximum?2011-04-27
  • 2
    Take all $z_i=0$. Then $P(z)=z^n$ with the maximum $m=1$. So your bound cannot be true...2011-04-27
  • 0
Did you consider any special cases such as (1) $z_1=z_2=\ldots =z_n = -1 + \varepsilon$, $\varepsilon>0$, or (2) $z_k=e^{i\cdot 2\pi k/n}$, $k=1,\ldots,n$?2011-04-27
  • 1
    @Fabian, $z_i$ need to be on the circle, $0$ is not on the unit circle.2011-04-27
  • 0
    The points are on the circle2011-04-27
  • 0
    @Fabian you are right let me restate the question to be more specific.2011-04-27
  • 0
@Thomas: Why are they on the circle? (I read "in the circle".)2011-04-27
  • 0
    @gatu: why do you think the inequality should hold?2011-04-27
  • 0
    @Fabian: I did some simulations that suggest that the inequality should hold.2011-04-27
  • 0
    @gatu: Nice problem! I think I can get exponent $\frac{1}{2} +\epsilon$ for any positive $\epsilon$, where you are looking for $\frac{1}{2}$.2011-04-28
  • 0
@User6312: That's good! What is the idea behind it?2011-04-28
  • 0
    This reminds me of the *Kahane polynomials* - these are polynomials of degree $n$ having coefficients on the unit circle and a sup norm like $\sqrt{n} + \varepsilon$ (Note the $\ell^2$- norm of such polynomial is $\sqrt{n}$). The difference in this case is, of course, that we are looking at the zeros here...2011-04-28
  • 0
    @gatu: There was a serious mistake in my argument. Perhaps it can be fixed, but I am not optimistic.2011-04-28
  • 0
    To me, this problem doesn't really seem probabilistic. Isn't it the same thing to simply say "for any choice of $n$ points on the unit circle, $\max_{|z|=1} |\prod(z-z_i)| \ge e^{\alpha\sqrt{n}}$"?2011-04-28
  • 1
    @Hans: no, "almost surely as $n\to\infty$" is not the same "always for finite $n$". @ght: There seems to be a recurring misunderstanding of your phrase "in the unit circle" as "on the unit circle" -- perhaps it would be preferable to write "in the unit disk"?2011-04-28
  • 0
@joriki: Hm, I didn't notice the phrase "as $n\to\infty$". Not that I can immediately tell what difference it makes, but perhaps if I brush up my probability skills a little... :-) And yes, "in the unit disk" or "inside the unit circle" is much clearer than "in the unit circle" (I, for example, thought it meant $|z|=1$).2011-04-28
  • 0
    @Hans: As an example, if you throw $n$ coins, you will almost surely throw tails at least once as $n\to\infty$, but it's not true for any finite $n$ that you will always throw tails at least once.2011-04-28
  • 1
    @Hans and @joriki: I want the points $z_{1},z_{2},\ldots,z_{n}$ to be in the unit circle, meaning that $|z_{i}|=1$ for all $i$. Let me clarify this further in the question.2011-04-28
  • 1
    @ght: Then "on the unit circle" would probably be the clearest.2011-04-28
  • 0
    @joriki: In the case at hand, the estimate is false if you look at any fixed $n$, since $P(z)=z^n+1$ has its roots on the unit circle and has maximum modulus equal to 2 (independently of $n$). So if I understand correctly, one would need to show (roughly speaking) that the probability of coming close to such a symmetric arrangement of the roots (or to any arrangement which violates the desired estimate) tends to zero as $n \to \infty$?2011-04-29
  • 0
    @Hans: Exactly.2011-04-29

3 Answers

5

The logarithm of your objective function is

$$\sum_{i=1}^n \ln\lvert z - z_i\rvert\;.$$

If you think of the points as continuously distributed for large $n$, this becomes $n$ times the average

$$\frac{1}{\pi}\int_{D_1} \ln\lvert x-x' \rvert \mathrm d^2x'\;.$$

This is the potential of a uniformly charged disk at distance $|x|$ from the centre, within the disk. It can be evaluated, as in the three-dimensional case of a uniformly charged sphere, by splitting the integral into a part for the rings interior to $x$, for which the potential has to be evaluated at radius $|x|$, and a part for the rings exterior to $x$, for which the potential has to be evaluated at the radius of the rings:

$$ \begin{eqnarray} \frac{1}{\pi}\int_{D_1} \ln\lvert x-x' \rvert \mathrm d^2x' &=& 2\left(\int_0^{|x|}\ln|x|\,r\,\mathrm dr+\int_{|x|}^1 r\ln r\,\mathrm dr\right)\\ &=&\frac{1}{2}\left(|x|^2-1\right)\;. \end{eqnarray} $$

So the average contribution from a uniform density of points is zero at the boundary and negative everywhere else. Does that fit with your numerical experiments? It suggests that the maximum should typically be near the boundary for large $n$.

It also suggests that you can think of the square root term in your conjecture as arising from fluctuations around this uniform density, so the conjecture could be rephrased as something like "Almost surely, there will be local fluctuations in the density of the points to create a fluctuation of $\alpha\sqrt{n}$ in the sum of the logarithms at some point on the boundary." I'll give some more thought to how this might be made rigorous.
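Not part of the original answer: the closed form above is easy to spot-check numerically. The helper name `disk_log_potential` and the sample size are my own choices; the check compares the Monte Carlo disk average of $\ln|x-x'|$ with $(|x|^2-1)/2$.

```python
import numpy as np

# Monte Carlo spot-check (my addition) of the claim that the uniform disk
# average of ln|x - x'| equals (|x|^2 - 1)/2 for |x| <= 1.
def disk_log_potential(x, N=200_000, seed=1):
    rng = np.random.default_rng(seed)
    r = np.sqrt(rng.uniform(0.0, 1.0, N))      # area-uniform radius in the unit disk
    t = rng.uniform(0.0, 2.0 * np.pi, N)
    xp = r * np.exp(1j * t)                    # uniform samples x' in D_1
    return np.mean(np.log(np.abs(x - xp)))     # equals (1/pi) * integral over D_1

for x in (0.0, 0.5, 0.9):
    print(x, disk_log_potential(x), 0.5 * (x ** 2 - 1))
```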

  • 1
I don't really see what information we obtain from your argument. You said "It suggests that the maximum should typically be near the boundary for large $n$", but by the maximum principle the maximum of $|P(z)|$ over the closed unit disk always occurs on the unit circle.2011-04-28
  • 0
    @gatu: Well, that goes to show I'm not overly familiar with complex analysis :-) I was hoping to contribute another perspective; it may or may not point towards a solution. However, the argument not only suggests that the maximum occurs at the boundary but also indicates why one might expect the logarithm to go with $\sqrt{n}$ and not with $n$: because the average is $0$. But now perhaps you'll tell me there's a theorem in complex analysis asserting this, anyway :-)2011-04-28
1

This seems $\textbf{somewhat related}$ to a Miklos Schweitzer $(1972)$ problem.

$\textbf{Problem 6.}$ Let $P(z)$ be a polynomial of degree $n$ with complex coefficients, $$P(0)=1, \quad \ \text{and} \ |P(z)| \leq M \ \ \text{for} \ |z| \leq 1.$$ Prove that every root of $P(z)$ in the closed unit disc has multiplicity at most $c\sqrt{n}$, where $c=c(M) > 0$ is a constant depending only on $M$.

$\textbf{Solution 1.}$ (As given in "Contests in Higher Mathematics" by Gabor J. Szekely.) It is sufficient to examine the multiplicity of the number $1$. In fact, if we prove something for $1$, then we may apply the result to the polynomial $p(z)=P(\alpha z)$ with $|\alpha| \leq 1$, and in this way we obtain the same estimate for all roots lying in the unit disc. The idea of the solution is the following. We consider the integral $$F(P) = \int\limits_{0}^{2\pi} \log{|P(e^{i\phi})|} \ d\phi$$ and show that it exists and is non-negative. Then we estimate it from above, once in the neighborhood of $1$ with the aid of the multiplicity of $1$ and the degree of $P$, and once at the other points using the condition $|P(z)| \leq M$.

It is sufficient to prove the existence of the integral for polynomials of the form $z-z_{0}$. If $$P(z)= c \cdot\prod\limits_{i=1}^{n}(z-z_{i})$$ then $$\log{|P(z)|} = \log{|c|} + \sum\limits_{i=1}^{n} \log{|z-z_{i}|}$$

The existence of $$\int\limits_{0}^{2\pi} \log{|e^{i\phi}-z_{0}|}\ d\phi$$ is evident if $|z_{0}| \neq 1$. Next, let $|z_0|=1$. Without loss of generality, we may assume that $z_{0}=1$ (a substitution $\phi \mapsto \phi+\phi_{0}$ takes the two cases into each other). Then $$|e^{i\phi}-1| = \left\lvert\:2\sin\frac{\phi}{2}\right\rvert$$ and $$\int\limits_{0}^{2\pi} \log\:\left\lvert\:2\sin\frac{\phi}{2}\right\rvert \ d\phi$$ really exists and is equal to $0$.
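Not in the original solution: a short numerical check (my addition) that the integral above really vanishes. A midpoint grid keeps the samples away from the logarithmic singularity at $\phi=0$ (and $2\pi$).

```python
import numpy as np

# Numerical check (my addition): int_0^{2pi} log|2 sin(phi/2)| dphi = 0.
# Midpoints (k + 1/2) * h avoid the singularities at phi = 0 and 2*pi.
M = 200_000
phi = (np.arange(M) + 0.5) * (2.0 * np.pi / M)
integral = np.mean(np.log(np.abs(2.0 * np.sin(phi / 2.0)))) * 2.0 * np.pi
print(integral)  # should be close to 0
```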

Next, compute the integral $$f(\alpha)= \int\limits_{0}^{2\pi} \log\biggl|1-\frac{e^{i\phi}}{\alpha}\biggr| \ d\phi \qquad\qquad (\alpha\neq 0)$$ for $\alpha \neq 1$. Obviously, its value depends on the absolute value of $\alpha$ only (again, a substitution as above may be applied), so it is the same for the numbers $\alpha\epsilon_{1},\alpha\epsilon_{2},\cdots, \alpha\epsilon_{n}$, where the $\epsilon_{j}$ are the $n$-th roots of unity. Therefore, $$n f(\alpha) = \sum\limits_{j=1}^{n} f(\alpha\epsilon_{j})=\int\limits_{0}^{2\pi} \log\:\Biggl| \prod\limits_{j=1}^{n} \biggl(1-\frac{e^{i\phi}}{\alpha\epsilon_{j}}\:\biggr)\Biggr| \ d\phi$$

Now if $|\alpha|>1$, then for $n\to\infty$ the integral on the right-hand side tends to zero since $1-e^{in\phi}/\alpha^{n} \to 1$ uniformly; thus $f(\alpha)=0$. On the other hand, if $|\alpha| <1$, then $$n(f(\alpha)+2\pi\log{|\alpha|})=\int\limits_{0}^{2\pi} \log{|\alpha^{n}-e^{in\phi}|} \ d\phi \to 0$$ since $1-|\alpha|^{n} < |\alpha^{n}-e^{in\phi}|<1+|\alpha|^{n}$; so in this case only $f(\alpha)=-2\pi \log{|\alpha|}$ is possible. In each of the three cases, we have $f(\alpha) \geq 0$. In our case, the relation $P(0)=1$ implies $$P(z)=\prod\limits_{j=1}^{n} \biggl(1-\frac{z}{z_j}\biggr),$$ whence $$F(P)= \sum\limits_{j=1}^{n} f(z_j) \geq 0 \qquad (1)$$
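Again as an addition to the solution: the three cases for $f(\alpha)$ can be checked numerically. The function name `f` mirrors the text; the grid size is arbitrary.

```python
import numpy as np

# Numerical check (my addition) of f(alpha) = int_0^{2pi} log|1 - e^{i phi}/alpha| dphi:
# f = 0 for |alpha| > 1 and f = -2*pi*log|alpha| for |alpha| < 1.
M = 100_000
phi = (np.arange(M) + 0.5) * (2.0 * np.pi / M)

def f(alpha):
    return np.mean(np.log(np.abs(1.0 - np.exp(1j * phi) / alpha))) * 2.0 * np.pi

print(f(2.0))                                 # |alpha| > 1: expect ~0
print(f(0.5), -2.0 * np.pi * np.log(0.5))     # |alpha| < 1: expect -2*pi*log|alpha|
```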

Now let $P(z)=(z-1)^{k} Q(z)=a_{0}+a_{1}z+\cdots +a_{n}z^{n}$, where $Q(1)\neq 0$. We estimate $F(P)$ with the help of $k$, $n$, and $M$. Let $$F(P)=\int\limits_{0}^{2\pi} = \int\limits_{-\epsilon}^{\epsilon} + \int\limits_{\epsilon}^{2\pi-\epsilon} = F_{1}+F_{2}$$

Then $$F_{2} \leq \int\limits_{0}^{2\pi} \log{M} \ d\phi = 2 \cdot \pi \cdot \log{M} \qquad (2)$$

We split $F_{1}$ again into two parts: $$F_{1} = \int\limits_{-\epsilon}^{\epsilon} \log{|(z-1)^{k}|} \ d\phi + \int\limits_{-\epsilon}^{\epsilon} \log|Q(e^{i\phi})| \ d\phi=F_{3}+F_{4} \qquad (3)$$

Clearly \begin{align*} F_{3} &= k \int\limits_{-\epsilon}^{\epsilon} \log\Bigl(2\cdot\sin\frac{|\phi|}{2}\Bigr) \ d\phi < 2k\int\limits_{0}^{\epsilon} \log\phi \ d\phi \\ &=2k\epsilon \cdot (\log\epsilon -1) \qquad (4) \end{align*}

For estimating $F_{4}$ we need an estimate of $Q$, which we obtain from the coefficients of the expansion of $Q$ about $1$. Let $Q(1+z)= R(z)$. We calculate the coefficients of $R$ from those of $P$ using the formula $R(z)= \frac{P(z+1)}{z^{k}}$:

\begin{align*} P(z+1) &=\sum\limits_{j=0}^{n} a_{j}(z+1)^{j} =\sum\limits_{j=0}^{n}\sum\limits_{m=0}^{j}a_{j}\cdot {j \choose m} \cdot z^{m} \\ &=\sum\limits_{m=k}^{n} z^{m} \cdot \sum\limits_{j=m}^{n} a_{j} \cdot {j \choose m} \end{align*} since for $m < k$, the coefficient of $z^{m}$ is $0$ by our assumption. Thus $$R(z)=\sum\limits_{m=0}^{n-k}b_{m}z^{m}, \qquad b_{m}=\sum\limits_{j=m+k}^{n}a_{j} {j \choose m+k} \qquad (5)$$

Further, by the Cauchy inequalities, $|a_{j}|=|P^{(j)}(0)|/j! \leq M$. Putting this into $(5)$, we find $$|b_{m}| \leq M \sum\limits_{j=m+k}^{n} {j \choose m+k} = M {n+1 \choose m+k+1} \qquad (6)$$
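The hockey-stick identity used in $(6)$, $\sum_{j=m+k}^{n}\binom{j}{m+k}=\binom{n+1}{m+k+1}$, can be spot-checked exhaustively for small parameters (my addition, not part of the solution):

```python
from math import comb

# Exhaustive check (my addition) of sum_{j=r}^{n} C(j, r) = C(n+1, r+1)
# for small n; here r plays the role of m+k in the solution.
for n in range(0, 30):
    for r in range(0, n + 1):
        lhs = sum(comb(j, r) for j in range(r, n + 1))
        assert lhs == comb(n + 1, r + 1)
print("hockey-stick identity verified for all n < 30")
```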

If $|z|=\delta$ and $\delta(n-k)/(k+2) < 1$, then in view of $(6)$,

\begin{align*} \frac{|R(z)|}{M} & \leq \sum\limits_{m=0}^{\infty} \delta^{m} {n+1 \choose m+k+1} \\ &={n+1 \choose k+1} \cdot \Bigl(1+\delta \frac{n-k}{k+2} + \delta^{2}\frac{n-k}{k+2}\cdot\frac{n-k-1}{k+3} + \cdots \Bigr) \\ &\leq {n+1 \choose k+1} \sum\limits_{j=0}^{\infty} \Bigl(\delta \frac{n-k}{k+2}\Bigr)^{j} = {n+1 \choose k+1} \frac{1}{1-\delta \frac{n-k}{k+2}} \end{align*}

Since $|e^{i\phi}-1|=2|\sin\frac{\phi}{2}| \leq |\phi|$, if $\epsilon < (k+2)/(2(n-k))$, then for $|\phi| \leq \epsilon$ we have $$|Q(e^{i\phi})|= |R(e^{i\phi}-1)| \leq M {n+1 \choose k+1}\frac{1}{1-\epsilon\frac{n-k}{k+2}} < 2M {n+1 \choose k+1}.$$ If $k\geq 2$, then using $t! > t^{t}e^{-t}$ we obtain

\begin{align*} {n+1 \choose k+1} &= \frac{(n+1) \cdot n \cdot (n-1) \cdots (n-k+1)}{(k+1)!} \\ &= \frac{(n^{2}-1)\cdot n \cdot (n-2) \cdots (n-k+1)}{(k+1)!} < \frac{n^{k+1}}{(k+1)!} < \Bigl(\frac{en}{k+1}\Bigr)^{k+1} \end{align*}
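As a sanity check (my addition), this chain of inequalities can be verified numerically for small $n$ and $2 \le k < n$:

```python
from math import comb, e, factorial

# Numerical check (my addition) of
#   C(n+1, k+1) < n^(k+1)/(k+1)! < (e*n/(k+1))^(k+1)   for k >= 2.
for n in range(3, 40):
    for k in range(2, n):
        binom = comb(n + 1, k + 1)
        assert binom < n ** (k + 1) / factorial(k + 1) < (e * n / (k + 1)) ** (k + 1)
print("binomial bound verified for all n < 40")
```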

Thus $$\log{|Q(e^{i\phi})|} < \log{2M} + (k+1)\log\frac{en}{k+1}$$ and consequently $$F_{4} < 2\epsilon \biggl[ \log(2M) + (k+1)\log\frac{en}{k+1}\biggr]$$

Collecting everything, by $(1)$, $(2)$, $(3)$ and $(4)$, $$(\pi+\epsilon)\log{M} + \epsilon\log\frac{2en}{k+1} + \epsilon k \log\frac{\epsilon n}{k+1} \geq 0 \qquad (7)$$

Now if $n=k^{2}/2c$, let $\epsilon = c/k$ (this fulfills the condition $\epsilon < (k+2)/(2(n-k))$). Then $(7)$ becomes $$\Bigl(\pi+\frac{c}{k}\Bigr) \log{M} + \frac{c}{k} \log\frac{e k^{2}}{c(k+1)} + c \log\frac{k}{2(k+1)} \geq 0$$

We only make things worse if we also replace $k+1$ by $k$. Moreover, $c/k=k/2n \leq \frac{1}{2}$ gives $\pi+\frac{c}{k} < 4$, and $\frac{c}{k} \log(ek/c) < \frac{c}{k} \cdot \frac{ek}{c}=e < 3$. So finally $$4\log{M} + 3 \geq c \log{2}, \qquad (8)$$ that is, $$c < 8\log{M}+6.$$ Since $k=\sqrt{2cn}$, relation $(8)$ means that we have proved the assertion of the problem with $$c(M)=\sqrt{16\log{M}+12}$$

  • 0
@Joriki: Dear Joriki, may I know what your edit was? The "rollback" option seems to be missing :(2011-07-16
  • 0
    I changed "is evident if $z_0\neq1$" to "is evident if $|z_0|\neq1$" (added the modulus). I'm also about to correct the two equations preceding "really exists and is equal to 0.". I hope this is what you intended; please let me know if I'm making mistakes.2011-07-16
  • 0
    @Joriki: No.You are absolutely spot on. Thanks.2011-07-16
  • 0
    Your calculation of $$\int\limits_{0}^{2\pi} \log\biggl|1-\frac{e^{i\phi}}{\alpha}\biggr| \ d\phi$$ is nice, but this is easier to get as a special case of Gauss' law in two dimensions (see my answer and the one it links to for details).2011-07-16
  • 1
    I've worked through most of this now (and corrected a couple more mistakes; please check) -- but I don't see how it contributes to answering the question?2011-07-16
  • 0
@Joriki: I just posted this answer because it seemed (to me at least) as though the questions are related. Shall I remove this answer? I cross-posted this at MO as well.2011-07-16
  • 0
    No, I didn't mean to imply that you should remove it -- but the first sentence, "This seems to be a Miklos Schweitzer (1972) problem." sounds like you think this is directly related -- I'd precede it by a short paragraph explaining why you think it's relevant.2011-07-16
  • 0
@Chandru: Thank you for posting this problem. Can you please explain how this "new problem" is related to, or even answers, my question?2011-07-17
0

I believe the conjecture that $\sup_\theta \prod_j |e^{i\theta} - z_j| \gg e^{c\sqrt{n}}$ almost surely is not true. It would imply $\sup_\theta \sum_j \log |e^{i\theta} - z_j| \gg \sqrt{n}$. But from the analogy with the absolute value of a Brownian motion $|B_t|$, $t \in [0,1]$, I think one can only say $\sup_\theta \frac{1}{\sqrt{n}}\sum_j \log |e^{i\theta} - z_j| > 0$ almost surely.

Consider the stochastic process $F(\zeta) = \frac{1}{\sqrt{n}}\sum_j \log|\zeta - z_j|$, where $|z_j| = 1$ and $|\zeta| = 1$. I claim $F$ converges to a Gaussian process, in the sense that every finite-dimensional distribution is jointly Gaussian. This can be established with the moment method of the CLT, using only mixed 2nd moments, since those terms dominate the sum for small moment powers, together with the fact that $\frac{1}{2\pi} \int_0^{2\pi} (\log(2(1- \cos t)))^k dt \ll k! / 2^k$, so that the tail of the Taylor expansion vanishes. The covariance structure of this process doesn't seem to admit a closed form, but my Mathematica plot shows it's convex with values between $1$ and $-1/2$ (the latter for two antipodal $\zeta$'s). $$ G(\eta) := \frac{1}{2\pi} \int_0^{2\pi} \log(2(1 + \cos \theta)) \log(2(1 + \cos(\theta + \eta))) \ d \theta.$$ $$ C(\eta) := G(\eta) / G(0); \qquad G(0) = \frac{\pi^2}{3}.$$ Now we can borrow ideas from James Lee's lecture notes to get an upper bound on $\sup_\zeta F(\zeta)$ by Slepian comparison with some better-understood Gaussian processes.

[Mathematica plot of the covariance function $C(\eta)$]
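A quick numerical check (my addition, standing in for the Mathematica computation) of $G(0)=\pi^2/3$ and of the antipodal value $C(\pi)\approx -1/2$:

```python
import numpy as np

# Numerical check (my addition) of G(0) = pi^2/3 and C(pi) = G(pi)/G(0) ~ -1/2.
# Midpoint grid keeps samples away from the logarithmic singularities.
M = 200_000
theta = (np.arange(M) + 0.5) * (2.0 * np.pi / M)

def G(eta):
    f = np.log(2.0 * (1.0 + np.cos(theta)))
    g = np.log(2.0 * (1.0 + np.cos(theta + eta)))
    return float(np.mean(f * g))             # (1/2pi) * integral

print(G(0.0), np.pi ** 2 / 3.0)   # expect these to be close
print(G(np.pi) / G(0.0))          # expect approximately -1/2
```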