This is a follow-up question to this one about $\sum_{p\leq x} p^{-s}$.
Does the truth of the Riemann Hypothesis imply a better bound on $\sum\limits_{p\leq x} p^{-s}$?
-
Sorry, that's beyond me. – 2012-01-01
3 Answers
The key to the proof in my other answer was the quantitative prime number theorem $\pi(x)=\text{li}(x)+O\left(xe^{-c\sqrt{\log x}}\right),\ \ \ \ \ \ \ \ \ \ (0)$ along with partial summation. Because we can use partial summation, all that really matters is the case $s=0$; this case, which is just $\pi(x)$ itself, tells us about everything else. The Riemann Hypothesis implies that $\pi(x)=\text{li}(x)+O\left(x^{\frac{1}{2}}\log x\right),\ \ \ \ \ \ \ \ \ \ \ \ (1)$ and we will look at why this is true later on. For now, let's look at the consequence, and what happens to the sum $\sum_{p\leq x}p^{-s}$.

Going back to the other proof, the error term was just $t^{-s}\left(\pi(t)-\text{li}(t)\right)\biggr|_{2}^{x}+s\int_{2}^{x}t^{-s-1}\left(\pi(t)-\text{li}(t)\right)dt,$ which after substituting $(1)$ becomes $O\left(x^{-\text{Re}(s)+\frac{1}{2}}\log x+|s|\int_{2}^{x}t^{-\text{Re}(s)-\frac{1}{2}}\log t\,dt\right).$ The integral term is then $\ll\frac{|s|}{|\text{Re}(s)-\frac{1}{2}|}x^{-\text{Re}(s)+\frac{1}{2}}\log x,$ so that for $\text{Re}(s)\neq\frac{1}{2}$, $\text{Re}(s)<1$, $\sum_{p\leq x}p^{-s}=\text{li}\left(x^{1-s}\right)+O\left(\frac{|s|}{|\text{Re}(s)-\frac{1}{2}|}x^{-\text{Re}(s)+\frac{1}{2}}\log x\right).$

The cases $\text{Re}(s)=\frac{1}{2}$ and $\text{Re}(s)=1$ are special and must be dealt with separately. For example, $\sum_{p\leq x}p^{-\frac{1}{2}+i\gamma}=\text{li}\left(x^{\frac{1}{2}-i\gamma}\right)+O\left(|\gamma|\log^{2}x\right).$ (We do not consider $\text{Re}(s)>1$, since the series converges absolutely there.) Notice that if we allow a factor of $x^\epsilon$ for some $\epsilon>0$, we can actually remove the denominator $|\text{Re}(s)-\frac{1}{2}|$. This is done by splitting into the two cases $|\text{Re}(s)-\frac{1}{2}|\leq\epsilon$ and $|\text{Re}(s)-\frac{1}{2}|>\epsilon$, so that the implied constant depends only on $\epsilon$. In particular, $\sum_{p\leq x}p^{-s}=\text{li}\left(x^{1-s}\right)+O_\epsilon\left(|s|x^{-\text{Re}(s)+\frac{1}{2}+\epsilon}\right).$
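For real $s$, the conditional asymptotic above is easy to sanity-check numerically. The sketch below is my own illustration, not part of the answer: the sieve, the series used to evaluate $\text{li}$, and the cutoff $x=10^5$ are all assumptions made for the check. It compares $\pi(x)$ against $\text{li}(x)$, and $\sum_{p\leq x}p^{-1/4}$ against $\text{li}(x^{3/4})$, to see that the errors sit well inside the RH-quality bounds.

```python
import math

def li(x, terms=200):
    # logarithmic integral via li(x) = gamma + log(log x) + sum_{k>=1} (log x)^k / (k * k!)
    lx = math.log(x)
    total = 0.5772156649015329 + math.log(lx)  # Euler-Mascheroni constant
    term = 1.0
    for k in range(1, terms):
        term *= lx / k          # term = (log x)^k / k!
        total += term / k
    return total

def primes_upto(n):
    # simple sieve of Eratosthenes
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for i in range(2, int(n**0.5) + 1):
        if sieve[i]:
            sieve[i*i::i] = bytearray(len(sieve[i*i::i]))
    return [p for p in range(2, n + 1) if sieve[p]]

x = 10**5
ps = primes_upto(x)

# (1): |pi(x) - li(x)| should be well within sqrt(x) * log x
err_pi = abs(len(ps) - li(x))

# prime sum at s = 1/4: compare with li(x^{3/4}); the conditional error
# term is (|s| / |Re(s) - 1/2|) * x^{1/4} * log x = x^{1/4} * log x here
s = 0.25
err_sum = abs(sum(p**-s for p in ps) - li(x**(1 - s)))
```

Both errors come out far below the stated bounds at this modest height, though of course a finite check proves nothing about the asymptotics.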
Remark: I realized that in my last post I may have been a bit careless about complex $s$: some real parts need to be put in for the bounding to make sense, and $|s|$ in some places as well, all of which can be ignored for real $s$.
Why do we have equation $(1)$? This is quite an important question, and I won't give a complete answer here. For a complete proof, see Titchmarsh's book, or Montgomery and Vaughan's "Multiplicative Number Theory".
Using some complex analysis (we need some lemmas bounding certain things so everything works out nicely) we can prove that $ \sum_{p^k\leq x} \log p=x-\sum_{\rho:\zeta(\rho)=0}\frac{x^\rho}{\rho}-\frac{\zeta'(0)}{\zeta(0)}. $
The left-hand side is a step function which jumps at the prime powers (often written as $\psi(x)=\sum_{n\leq x}\Lambda(n)$), whereas the right-hand side is a continuous function plus a sum over all of the zeros of the zeta function. The zeros magically conspire at prime powers to make this conditionally convergent series suddenly jump. We can remove the trivial zeros at the cost of an error bounded by $\log x$, so that this sum really depends on the nontrivial zeros of zeta. Specifically, if we can bound the real parts of the zeros, then we can bound this error term (being careful about convergence and taking certain limits properly). The best bound possible is $\text{Re}(\rho)=\frac{1}{2}$, which is why the best error will be just slightly larger than $\sqrt{x}$ (about a factor of $\log^{2}x$ larger). Partial summation then takes us to a bound for $\pi(x)$; in particular, we get $(1)$.
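As a quick numerical illustration (my own sketch, with $x=10^5$ chosen purely for speed), one can compute $\psi(x)$ directly from its definition and check that $\psi(x)-x$ is comfortably inside the RH-quality error $\sqrt{x}\log^{2}x$:

```python
import math

def psi(x):
    # Chebyshev's psi(x): sum of log p over all prime powers p^k <= x
    n = int(x)
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for i in range(2, int(n**0.5) + 1):
        if sieve[i]:
            sieve[i*i::i] = bytearray(len(sieve[i*i::i]))
    total = 0.0
    for p in range(2, n + 1):
        if sieve[p]:
            q, lp = p, math.log(p)
            while q <= n:      # count each prime power p, p^2, p^3, ...
                total += lp
                q *= p
    return total

x = 10**5
# explicit formula: psi(x) = x - sum_rho x^rho/rho - zeta'(0)/zeta(0);
# if Re(rho) = 1/2 for every zero, then |psi(x) - x| = O(sqrt(x) log^2 x)
err = abs(psi(x) - x)
```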
I hope this gives an idea of why it is true; I suggest looking in some of those books. Another good question to ask is: why does equation $(0)$ hold? This requires even more time to prove, as we need to construct a zero-free region for $\zeta(s)$. (Again, this is in Montgomery and Vaughan's book.)
Hope that helps,
-
And for $\text{Re}(s)=1/2$ and $\gamma = 0$, is $\text{li}(\sqrt{x}) + O(x^\epsilon)$ the best asymptotic behaviour possible under the RH? Because here $O(\log^2 x)$ makes no sense. Or would it be $O(1)$? – 2017-03-27
The exact answer is Theorem IV, due to von Mangoldt, reproduced in Landau's book "Handbuch der Lehre von der Verteilung der Primzahlen", visible at Google Books.
-
I have a paper entitled "The Riemann Hypothesis concerning the zeta function" that contributes to solving this problem, but I don't know how to send it to you. My e-mail: aldopperetti@gmail.com – 2017-12-29
Consider the analogous problem for $ \sum_{p^k < x} \frac{\log p}{p^{ks}}. $ Perron's formula tells us that this is equal to $ \frac{1}{2\pi i} \int_{c - i \infty}^{c + i \infty} -\frac{\zeta'}{\zeta} ( s + w) \frac{x^w}{w}\, dw. $ Shifting contours, we collect a pole at $w = 1 - s$ and a pole at every zero of $\zeta$, that is, at $w = \rho - s$ with $\rho$ a zero of $\zeta(s)$. Therefore (heuristically) we derive the following formula: $ \sum_{p^k < x} \frac{\log p}{p^{ks}} = \frac{x^{1 -s}}{1 - s} - \sum_{\rho} \frac{x^{\rho - s}}{\rho - s}. $ There are some delicate convergence issues here, and we are also missing some small negligible terms, but for simplicity let's not worry about them.
The way to think about this formula is as follows: the main term is $x^{1-s}/(1-s)$. When the imaginary part of $s$ is very small, the main term dominates and the contribution of the zeros is negligible. When the imaginary part of $s$ is very large, one sees only the zeros, and in particular those zeros $\rho$ with imaginary part close to that of $s$, because of the factor $\rho - s$ in the denominator. There is also a medium range in which both the zeros and the main term contribute.
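For real $s$ (imaginary part zero), the dominance of the main term is easy to see numerically. The following Python check is my own sketch; the values $x=10^5$ and $s=0.3$ are illustrative choices, not from the answer:

```python
import math

x, s = 10**5, 0.3  # hypothetical test values; any real 0 < s < 1/2 behaves similarly
n = x
sieve = bytearray([1]) * (n + 1)
sieve[0] = sieve[1] = 0
for i in range(2, int(n**0.5) + 1):
    if sieve[i]:
        sieve[i*i::i] = bytearray(len(sieve[i*i::i]))

# left-hand side: sum over prime powers p^k <= x of log(p) / p^{ks}
lhs = 0.0
for p in range(2, n + 1):
    if sieve[p]:
        q, lp = p, math.log(p)
        while q <= n:
            lhs += lp / q**s
            q *= p

main = x**(1 - s) / (1 - s)   # contribution of the pole at w = 1 - s
rel_err = abs(lhs - main) / main
```

At this height the relative deviation from the main term is already under a few percent, consistent with the zeros contributing only a lower-order amount.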
Note also that the Riemann Hypothesis always gives a good bound for this sum, since the real part of every zero is then $1/2$. Provided that $|\Re s - 1/2| > \varepsilon$, the Riemann Hypothesis gives the bound $ \frac{x^{1-s}}{1-s} + O(x^{1/2 - \Re s} x^{\varepsilon} + x^{\varepsilon}). $ This does not immediately follow from the heuristic formula above. Notice that this is better than what follows from integration by parts (if we did what Eric did) as soon as $|s| > x^{\varepsilon}$.
You can pass from this formula to $ \sum_{p < x} p^{-s} $ by integration by parts and some simple bounds for the prime powers.
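This passage by partial summation is exact and can even be carried out numerically: with $A(t)=\sum_{p\leq t}\log p\cdot p^{-s}$, Abel summation gives $\sum_{p\leq x}p^{-s} = A(x)/\log x + \int_2^x A(t)/(t\log^2 t)\,dt$, and since $A$ is a step function and $\int_a^b dt/(t\log^2 t) = 1/\log a - 1/\log b$, the integral can be evaluated in closed form. A Python sketch (the values $x=10^5$ and $s=0.6$ are my own illustrative choices):

```python
import math

def primes_upto(n):
    # simple sieve of Eratosthenes
    sieve = bytearray([1]) * (n + 1)
    sieve[0] = sieve[1] = 0
    for i in range(2, int(n**0.5) + 1):
        if sieve[i]:
            sieve[i*i::i] = bytearray(len(sieve[i*i::i]))
    return [p for p in range(2, n + 1) if sieve[p]]

x, s = 10**5, 0.6
ps = primes_upto(x)
direct = sum(p**-s for p in ps)

# Abel summation: sum_{p<=x} p^{-s} = A(x)/log(x) + int_2^x A(t)/(t log^2 t) dt,
# where A(t) = sum_{p<=t} log(p) * p^{-s}.  A is constant between consecutive
# primes, and int_a^b dt/(t log^2 t) = 1/log(a) - 1/log(b), so the integral
# below is evaluated exactly, piece by piece.
A, integral = 0.0, 0.0
for i, p in enumerate(ps):
    A += math.log(p) * p**-s
    upper = ps[i + 1] if i + 1 < len(ps) else x
    integral += A * (1 / math.log(p) - 1 / math.log(upper))
recovered = A / math.log(x) + integral
```

Up to floating-point rounding, `recovered` agrees with the direct sum, which is exactly the identity used to pass between the two prime sums.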