What is $\sum_{i=1}^n |\{k \in \mathbb{N} \mid k | i\}|$ asymptotically (as a function of $n$)?
(I'm summing, for each of $1,\dotsc,n$, its number of divisors)
Or at least, what's the best upper bound you can find?
Let $d(n)$ denote the number of divisors of $n$.
Dirichlet's Asymptotic Formula. (Theorem 3.3 in Tom Apostol's Introduction to Analytic Number Theory) For all $x\geq 1$, we have $\sum_{n\leq x} d(n) = x \log x + (2C-1)x + O(\sqrt{x}),$ where $C$ is Euler's constant, $C = \lim_{n\to\infty}\left(1 + \frac{1}{2} + \frac{1}{3}+\cdots+\frac{1}{n}-\log n\right).$
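A quick numerical sanity check of Dirichlet's formula (a sketch, with $C$ hard-coded to double precision; the exact sum is computed via the identity $\sum_{n\leq x} d(n) = \sum_{d\leq x}\lfloor x/d\rfloor$, which is just the double sum below with the inner sum evaluated):

```python
import math

# Euler's constant C, hard-coded to double precision
C = 0.5772156649015329

def divisor_sum(x: int) -> int:
    """Exact value of sum_{n<=x} d(n), via sum_{d<=x} floor(x/d)."""
    return sum(x // d for d in range(1, x + 1))

x = 10**5
exact = divisor_sum(x)
main = x * math.log(x) + (2 * C - 1) * x
error = exact - main
print(f"exact = {exact}, main term = {main:.1f}")
print(f"error = {error:.1f}, error / sqrt(x) = {error / math.sqrt(x):.3f}")
```

The last ratio stays bounded as $x$ grows, consistent with the $O(\sqrt{x})$ error term.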
The proof starts by writing the sum as $\sum_{n\leq x}d(n) = \sum_{d\leq x} \sum_{q\leq x/d} 1$ and interpreting it as counting lattice points in the $(q,d)$-plane; the lattice points with $qd=n$ lie on a hyperbola. From $\sum_{q\leq x/d} 1 = \frac{x}{d} + O(1)$ and the fact that $\sum_{n\leq x}n^{\alpha} = \frac{x^{\alpha+1}}{\alpha+1} + O(x^{\alpha})$ for $\alpha\geq 0$, one derives $\sum_{n\leq x}d(n) = x\log x + O(x)$. The more precise formula exploits the symmetry of the region under the hyperbola about the line $q=d$. Apostol's book has all the details.
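The symmetry about $q=d$ also yields an exact identity worth seeing concretely: counting the lattice points with $d\leq\sqrt{x}$, then those with $q\leq\sqrt{x}$, and subtracting the square counted twice gives $\sum_{n\leq x} d(n) = 2\sum_{d\leq\sqrt{x}}\lfloor x/d\rfloor - \lfloor\sqrt{x}\rfloor^2$. A short sketch comparing it against the direct count:

```python
import math

def divisor_sum_naive(x: int) -> int:
    # O(x) time: sum_{n<=x} d(n) = sum_{d<=x} floor(x/d)
    return sum(x // d for d in range(1, x + 1))

def divisor_sum_hyperbola(x: int) -> int:
    # O(sqrt(x)) time: count lattice points under qd <= x in the strip
    # d <= sqrt(x), double it by the symmetry about q = d, and subtract
    # the sqrt(x) x sqrt(x) square that was counted twice.
    s = math.isqrt(x)
    return 2 * sum(x // d for d in range(1, s + 1)) - s * s

print(divisor_sum_naive(1000), divisor_sum_hyperbola(1000))
```

The two functions agree for every $x$; the second is how one computes the sum efficiently for large $x$.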
The error term $O(\sqrt{x})$ can be improved; you can see a summary in Wikipedia. The best estimate so far is due to Huxley, who proved the error term is $O(x^{(131/416)+\epsilon})$ for all $\epsilon\gt 0$. Hardy and Landau proved that the smallest exponent possible is at least $\frac{1}{4}$.
You can find a good summary in the following Wikipedia article.
The main term in the estimate for $\sum_{i=1}^n d(i)$ is $n\log n +(2\gamma -1)n,$ where $\gamma$ is Euler's constant.
There is a large literature on the error term. Dirichlet got $O(n^{1/2})$, and it is known that one cannot do better than $O(n^{1/4})$.
Note: For a number-theoretic function, this is a fantastically precise estimate. We have a dominant term, $n\log n$, that gives us the asymptotic behaviour. We have also an explicitly known correction term $(2\gamma-1)n$. And, by number-theoretic estimate standards, the error term is quite small compared to $(2\gamma-1)n$.
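To see how precise this really is, a sketch printing the three pieces at a concrete $n$ (again with $\gamma$ hard-coded; the exact sum uses $\sum_{i\leq n} d(i) = \sum_{d\leq n}\lfloor n/d\rfloor$):

```python
import math

GAMMA = 0.5772156649015329  # Euler's constant, to double precision

def D(n: int) -> int:
    """Exact value of sum_{i<=n} d(i)."""
    return sum(n // d for d in range(1, n + 1))

n = 10**6
dominant = n * math.log(n)           # main term
correction = (2 * GAMMA - 1) * n     # explicit second-order term
error = D(n) - dominant - correction # what's left over
print(f"n log n    ~ {dominant:.3e}")
print(f"(2g-1) n   ~ {correction:.3e}")
print(f"error      ~ {error:.3e}  (sqrt(n) = {math.isqrt(n)})")
```

At $n=10^6$ the dominant term is around $1.4\times 10^7$, the correction around $1.5\times 10^5$, and the error is only on the order of $\sqrt{n}=1000$ or less: each term is orders of magnitude below the previous one.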