According to "Concrete Mathematics" on page 434, elementary asymptotic methods show that $\displaystyle \sum_{k=1}^{n-1}\frac{k! \; k^{n-k}}{n!}$ is asymptotically $\sqrt{\frac{\pi n}{2}}$. Does anybody see how to show this?
-
In the second edition, it's on page 448. – 2011-03-28
-
I think you can find this somewhere in Flajolet and Sedgewick, Analytic Combinatorics. (At least I'm sure they quote the result; I'm not sure if they derive it, but if they don't derive it they give a source that does.) This book is available online. But I don't have time to track down the reference more precisely, so this comment may be of little help as it's an 800-page book. – 2011-03-28
2 Answers
The sum can asymptotically be replaced by an integral $$\sum_{k=1}^{n-1}\frac{k! \; k^{n-k}}{n!} \sim \int_1^n dk\, \frac{k! k^{n-k}}{n!}.$$ The integral can then be approximated by the Laplace (saddle-point) method; here the saddle point coincides with the upper boundary. First use Stirling's formula to keep only the exponential part, $$k! \; k^{n-k} \sim k^n e^{-k} = e^{-k +n \log k}$$ (the subexponential factor $\sqrt{2\pi k}$ cancels against the corresponding factor in $n!$ near $k=n$). The exponent $-k+n \log k$ attains its maximum at $k=n$, so we can expand the integrand around this value and obtain, to second order in the exponent, $$\frac{k! \; k^{n-k}}{n!} \approx e^{-(k-n)^2/2n}.$$ Thereby $$\sum_{k=1}^{n-1}\frac{k! \; k^{n-k}}{n!} \sim \int_1^n dk\, \frac{k! k^{n-k}}{n!} \sim \int_{-\infty}^n dk\, e^{- (k-n)^2/2n} = \sqrt{\frac{n \pi}{2}}. $$
Edit: in answer to the comment of Didier Piau, I will try to be a bit more rigorous.
To estimate the integrand, we expand the exponent around the saddle point. This amounts to finding the maximum of $$\log \left[\frac{k! \; k^{n-k}}{n!} \right] \sim (n-k) \log k + k (\log k -1) - n (\log n -1).$$ Taking the derivative with respect to $k$ yields $(n-k)/k$, which vanishes at $k=n$, where the maximum is attained. Expanding around this maximum to second order in $k-n$, we obtain $$\log \left[\frac{k! \; k^{n-k}}{n!} \right] \sim -\frac{(k-n)^2}{2n}.$$
What Didier Piau observed is a subdominant linear term (the $i/(2n)$ in his comment). This leads to a shift in the position of the maximum, but the curvature stays the same. In fact, the first nonvanishing term around the maximum is always the second-order one, as the linear term vanishes there by definition. To make this explicit, we include the next term in the Stirling expansion: $$\log \left[\frac{k! \; k^{n-k}}{n!} \right] \sim (n-k) \log k + k (\log k -1) + \frac{1}{2} \log k - n (\log n -1) - \frac{1}{2} \log n.$$ The derivative is now $(n-k+\tfrac12)/k$, so the maximum sits at $k^*= (2n+1)/2$ (exactly the shift noted by Didier Piau). Expanding to second order around this point yields $$\log \left[\frac{k! \; k^{n-k}}{n!} \right] \sim- \frac{(k - k^*)^2}{2n+1}.$$ The shifts of order 1 in the position of the maximum and in the curvature do not affect the result at leading order.
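One can check the location and width of this maximum numerically by extending $k$ to real values through the gamma function (my own sketch, not part of the original argument; the naive grid search is just for illustration):

```python
import math

def log_ratio(k, n):
    """log of Gamma(k+1) * k^(n-k) / Gamma(n+1), for real k > 0."""
    return math.lgamma(k + 1) + (n - k) * math.log(k) - math.lgamma(n + 1)

def argmax_near_n(n, half_width=3.0, step=1e-3):
    """Naive grid search for the maximizer of log_ratio in [n - w, n + w]."""
    best_k, best_v = n, log_ratio(n, n)
    k = n - half_width
    while k <= n + half_width:
        v = log_ratio(k, n)
        if v > best_v:
            best_k, best_v = k, v
        k += step
    return best_k

n = 500
print(argmax_near_n(n))   # close to (2n+1)/2 = 500.5, not exactly n
```

A second difference of `log_ratio` at the maximizer likewise reproduces the curvature $-2/(2n+1)$.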
-
It would be of course interesting to find the next term in the asymptotic expansion... – 2011-03-28
-
+1 Some related explanation here: http://en.wikipedia.org/wiki/Laplace%27s_method – 2011-03-28
-
@leonbloy: thank you for the link. This will help people to understand the answer... – 2011-03-28
-
To me, the part with the $\approx$ sign is a little too light on the side of rigor... For example, $k=n-i$ with $i\ge1$ fixed yields $1-(i^2+i)/(2n)+o(1/n)$ on the LHS and $\exp(-i^2/2n)=1-i^2/(2n)+o(1/n)$ on the RHS. – 2011-03-28
-
@Didier Piau: the linear term $i/2n$ leads to a shift in the position of the maximum but the curvature remains the same (see the edited answer). The sum is dominated by the upper boundary. All we need to know is the width of the peak in leading order in $n$. – 2011-03-28
-
Thanks for the edit. Unfortunately, I do not understand this shift thing: the maximum is exactly at $k=n$, and not anywhere else. Somewhat related to this, I still do not get whether you are extending the sequence of ratios indexed by $k$ to a function defined on the real numbers or not, and if you are, how you are doing it. Call me obdurate if you wish but as far as I am concerned, there is still way to go before I can consider this is a proof. :-) – 2011-03-28
-
@Didier Piau: it is not intended as a proof. The maximum of the summand is at $k=n$; the maximum of the integrand is not. To go from the sum to the integral you can apply Euler–Maclaurin (of course you have to convince yourself that the integral is more important than the other terms). – 2011-03-28
-
@Fabian Your proposal is not a proof per se, I have no problem with that. To move toward a more assured result, you could make precise which function you integrate and to which you suggest applying Euler–Maclaurin, if I read you correctly. Note that Euler–Maclaurin applies to the integral of a function independent of the bound $n$ of the integral, while your function depends on $n$. – 2011-03-28
-
@Didier Piau: $n$ is fixed (but large) -- then you apply Euler–Maclaurin to the sum over $k$, which becomes an integral... The function which I integrate is $\Gamma(k+1) k^{n-k}/\Gamma(n+1)$. An excellent (more rigorous) overview of the method can be found in Bender, Orszag, *Advanced Mathematical Methods for Scientists and Engineers*, especially Chapter 6.7. – 2011-03-28
-
Thanks for all the help. I was expecting something simpler when they said "elementary asymptotic methods", but this will certainly do. – 2011-03-29
-
@Fabian OK, once again: 1) Euler–Maclaurin has a remainder term, which one must control. This is (not trivial and) not done here. 2) The function $x\mapsto\Gamma(x+1)x^{n-x}$ is maximal where its derivative is zero. To compute this derivative and to solve for the point where it vanishes is (a difficult task and) not done here. 3) Bender-Orszag is excellent but invoking it cannot replace solving 1) and 2). In view of this, I wonder how high your confidence is in the value $\sqrt{\pi/2}$ of the constant. Sorry, life is hard... – 2011-03-30
-
@Fabian One last point: the sentence in your post which asserts that I *observed a subdominant linear term* (whatever that means) is misleading and does not correspond to what I wrote. – 2011-03-30
-
I agree with Didier Piau that there are some gaps in this argument (although I do think it captures the essence of what is going on). I've posted as a separate answer Knuth's own derivation. – 2011-08-24
-
@PeterR: Often mathematicians will use "elementary" to mean "does not use complex analysis". As you can see, "elementary" does not mean "easy." – 2013-06-15
I don't think the accepted answer is sufficiently rigorous, so I dug up Knuth's derivation in The Art of Computer Programming, Vol. 1 (3rd ed.), Section 1.2.11.3, pp. 119-120. Here's a summary of his argument.
Use Stirling's approximation on $k!$ to get $$\sum_{k=0}^n \frac{k^{n-k} k!}{n!} = \frac{\sqrt{2\pi}}{n!}\sum_{k=0}^n k^{n+1/2} e^{-k}\left(1 + O(k^{-1})\right).$$
Then, use the Euler-Maclaurin formula to obtain
$$\sum_{k=0}^n k^{n+1/2}e^{-k} = \int_0^n x^{n+1/2} e^{-x} dx + \frac{1}{2} n^{n+1/2}e^{-n} + \frac{1}{24}n^{n-1/2}e^{-n} - R,$$ where $R$ is the remainder term and can be shown to be $O(n^n e^{-n})$.
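To see numerically that this Euler–Maclaurin step behaves as claimed, one can compare the two sides after dividing out the common factor $n^{n+1/2}e^{-n}$ (my own sketch, not Knuth's; the scaling and the Simpson quadrature are implementation choices):

```python
import math

def f_scaled(x, n):
    """x^(n+1/2) e^(-x) divided by its scale n^(n+1/2) e^(-n); 0 at x = 0."""
    if x == 0:
        return 0.0
    log_scale = (n + 0.5) * math.log(n) - n
    return math.exp((n + 0.5) * math.log(x) - x - log_scale)

def simpson(g, a, b, panels):
    """Composite Simpson rule with an even number of panels."""
    h = (b - a) / panels
    total = g(a) + g(b)
    for i in range(1, panels):
        total += (4 if i % 2 else 2) * g(a + i * h)
    return total * h / 3

n = 200
em_sum = sum(f_scaled(k, n) for k in range(n + 1))
integral = simpson(lambda x: f_scaled(x, n), 0.0, float(n), 20000)
# scaled versions of the (1/2) n^(n+1/2) e^(-n) and (1/24) n^(n-1/2) e^(-n) terms
em_rhs = integral + 0.5 * f_scaled(n, n) + f_scaled(n, n) / (24 * n)
print(em_sum - em_rhs)   # tiny compared to the scaled remainder bound O(1/sqrt(n))
```

In scaled units the stated bound $R = O(n^n e^{-n})$ becomes $O(1/\sqrt{n})$, and the observed difference is far below it.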
Since the integral is an incomplete gamma function, we have $$\sum_{k=0}^n k^{n+1/2}e^{-k} = \gamma \left(n+\frac{3}{2},n\right) + O(n^{n+1/2}e^{-n}).$$
In a two-page analysis earlier in the section, Knuth shows that for large values of $x$ and fixed $y$, $$\frac{\gamma (x+1,x+y)}{\Gamma(x+1)} = \frac{1}{2} + O\left(\frac{y}{\sqrt{x}}\right).$$
Putting this all together yields $$\sum_{k=0}^n \frac{k^{n-k} k!}{n!} = \frac{\sqrt{2\pi}}{\Gamma(n+1)} \left[\Gamma\left(n+\frac{3}{2}\right)\left(\frac{1}{2} + O\left(\frac{1}{\sqrt{n}}\right)\right) + O\left(\Gamma\left(n+\frac{1}{2}\right)\right)\right].$$ Since, for fixed $a$ and $b$ (see, for instance, here), $$\frac{\Gamma(n+a)}{\Gamma(n+b)} = n^{a-b}\left(1 + O(n^{-1})\right),$$ we finally get $$\sum_{k=0}^n \frac{k^{n-k} k!}{n!} =\sqrt{ \frac{\pi n}{2}} + O(1).$$
Knuth says, "Our derivations... use only simple techniques of elementary calculus." So perhaps this is what is meant by "elementary asymptotic methods" in Concrete Mathematics, rather than "this derivation is easy." :)
Knuth also describes more advanced techniques whereby you can get the more precise estimate
$$\sum_{k=0}^n \frac{k^{n-k} k!}{n!} = \sqrt{ \frac{\pi n}{2}} - \frac{2}{3} + \frac{11}{24} \sqrt{\frac{\pi}{2n}} + \frac{4}{135n} - \frac{71}{1152} \sqrt{\frac{\pi}{2n^3}} + O(n^{-2}).$$
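As a sanity check on the constant $-2/3$ and the further coefficients, one can evaluate the sum exactly in log space and subtract the displayed expansion (my addition, not Knuth's; `series` simply evaluates the truncated formula above):

```python
import math

def exact_sum(n):
    """sum_{k=0}^n k^(n-k) k!/n!, computed in log space; the k=0 term is 0."""
    log_nfact = math.lgamma(n + 1)
    return sum(
        math.exp(math.lgamma(k + 1) + (n - k) * math.log(k) - log_nfact)
        for k in range(1, n + 1)
    )

def series(n):
    """Knuth's refined expansion, with the O(n^-2) term dropped."""
    return (math.sqrt(math.pi * n / 2) - 2 / 3
            + 11 / 24 * math.sqrt(math.pi / (2 * n))
            + 4 / (135 * n)
            - 71 / 1152 * math.sqrt(math.pi / (2 * n ** 3)))

for n in (50, 100, 200):
    print(n, exact_sum(n) - series(n))   # residual of the truncated expansion
```

The residual is dramatically smaller than the roughly $2/3$ gap left by the leading term $\sqrt{\pi n/2}$ alone.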
-
Is the more advanced technique also from *The Art of Computer Programming*? – 2011-09-17
-
@sigma.z.1980: I don't have my copy with me now. I think he does describe the techniques, but he doesn't go into the detail that he does with the first-order estimate. – 2011-09-17