53

Suppose we want to show that $$ n! \sim \sqrt{2 \pi} n^{n+(1/2)}e^{-n}$$

Instead we could show that $$\lim_{n \to \infty} \frac{n!}{n^{n+(1/2)}e^{-n}} = C$$ where $C$ is a constant. Maybe $C = \sqrt{2 \pi}$.

What is a good way of doing this? Could we use L'Hopital's Rule? Or maybe take the log of both sides (e.g., compute the limit of the log of the quantity)? So for example do the following $$\lim_{n \to \infty} \log \left[\frac{n!}{n^{n+(1/2)}e^{-n}} \right] = \log C$$
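As a quick numerical sanity check (not a proof), the ratio can be evaluated in floating point; working with $\log n! = \mathrm{lgamma}(n+1)$ avoids overflow, and the values visibly settle toward $\sqrt{2\pi}\approx 2.5066$. The function name `ratio` is just for illustration.

```python
import math

# Evaluate n! / (n^(n+1/2) e^(-n)) for growing n; compute in logs via
# lgamma to avoid overflowing the factorial.
def ratio(n):
    log_r = math.lgamma(n + 1) - (n + 0.5) * math.log(n) + n
    return math.exp(log_r)

for n in (10, 100, 1000, 10000):
    print(n, ratio(n))
print("sqrt(2*pi) =", math.sqrt(2 * math.pi))
```

The ratio decreases monotonically toward the constant, consistent with the leading correction term $1+\frac{1}{12n}$ of the full asymptotic series.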

  • 13
Keith Conrad has a good explanation of this: http://www.math.uconn.edu/~kconrad/blurbs/analysis/stirling.pdf 2011-12-28
  • 7
    Stirling's formula is a pretty hefty result, so the tools involved are going to go beyond things like routine application of L'Hopital's rule, although I am sure there is a way of doing it that involves L'Hopital's rule as a step. I've just scanned the link posted by jspecter and it looks good and reasonably elementary. Another pretty elementary treatment is Terry Tao's: http://terrytao.wordpress.com/2010/01/02/254a-notes-0a-stirlings-formula/. The two treatments I've taken the time to work through both involved doing residue calculus on the gamma function.2011-12-28
  • 4
    James: are you interested in proving that C exists or in computing its value? For the former, pretty standard procedures based on an estimation of the ratio of two consecutive terms are enough.2011-12-28
  • 2
    There's a proof along this line in [The Number $\pi$](http://www.ams.org/bookstore-getitem/item=tnp) by Eymard and Lafon. (A very nice book, btw.)2011-12-31
  • 0
    There is an exercise, or a sequence of exercises, in Spivak's *Calculus* that guides you through a proof. I don't have it on hand or know the details, but I believe it goes through the Euler–Maclaurin formula.2011-12-31
  • 0
    Short elementary proofs due to [Diaconis and Freedman](http://ocw.nctu.edu.tw/course/fourier/supplement/Stirling.pdf) and to [Patin](http://ocw.nctu.edu.tw/course/fourier/supplement/Stirling-1.pdf) were published in *The American Mathematical Monthly*.2012-01-01
  • 0
    Patin's paper is already mentioned in @Byron's answer.2012-01-02
  • 2
    Dan Romik, Stirling's Approximation for n!: The Ultimate Short Proof?, at [JSTOR](http://www.jstor.org/stable/2589351).2012-01-03
  • 0
It is dismaying that nobody has mentioned in this thread that Abraham de Moivre was the first to show that $$\lim_{n\to\infty} \frac{n!}{n^{n+1/2}e^{-n}}$$ is a positive number and to calculate it numerically, and then James Stirling showed that it is $\sqrt{2\pi}$.2016-01-26

9 Answers

71

A proof I found a while ago entirely relies on creative telescoping. Since $\frac{1}{n^2}-\frac{1}{n(n+1)}=\frac{1}{n^2(n+1)}$,

$$\begin{eqnarray*} \sum_{n\geq m}\frac{1}{n^2}&=&\sum_{n\geq m}\left(\frac{1}{n}-\frac{1}{(n+1)}\right)+\frac{1}{2}\sum_{n\geq m}\left(\frac{1}{n^2}-\frac{1}{(n+1)^2}\right)\\&+&\frac{1}{6}\sum_{n\geq m}\left(\frac{1}{n^3}-\frac{1}{(n+1)^3}\right)-\frac{1}{6}\sum_{n\geq m}\frac{1}{n^3(n+1)^3}\tag{1}\end{eqnarray*} $$ hence: $$ \psi'(m)=\sum_{n\geq m}\frac{1}{n^2}\leq \frac{1}{m}+\frac{1}{2m^2}+\frac{1}{6m^3}\tag{2}$$ and in a similar fashion: $$ \psi'(m) \geq \frac{1}{m}+\frac{1}{2m^2}+\frac{1}{6m^3}-\frac{1}{30m^5}.\tag{3}$$ Integrating twice, we have that $\log\Gamma(m)$ behaves like: $$ \log\Gamma(m)\approx\left(m-\frac{1}{2}\right)\log(m)-\color{red}{\alpha} m+\color{blue}{\beta}+\frac{1}{12m}\tag{4}$$ where $\color{red}{\alpha=1}$ follows from $\log\Gamma(m+1)-\log\Gamma(m)=\log m$.

That gives Stirling's approximation up to a multiplicative constant.

$\color{blue}{\beta=\log\sqrt{2\pi}}$ then follows from Legendre's duplication formula and the well-known identity:

$$ \Gamma\left(\frac{1}{2}\right)=2 \int_{0}^{+\infty}e^{-x^2}\,dx = \sqrt{\pi}.\tag{5}$$
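The two-sided bounds $(2)$ and $(3)$ on $\psi'(m)=\sum_{n\geq m}\frac{1}{n^2}$ are easy to test numerically before trusting the double integration. Below, $\psi'(m)$ is itself bracketed by a partial sum plus the standard integral tail bounds; all names are illustrative.

```python
import math

# Bracket psi'(m) = sum_{n>=m} 1/n^2 between a partial sum plus integral
# tail bounds, then verify the two-sided estimate given by (2) and (3).
def psi1_bracket(m, N=10**6):
    s = sum(1.0 / n**2 for n in range(m, N + 1))
    # the tail sum_{n>N} 1/n^2 lies between 1/(N+1) and 1/N
    return s + 1.0 / (N + 1), s + 1.0 / N

def bounds(m):
    upper = 1/m + 1/(2*m**2) + 1/(6*m**3)      # inequality (2)
    lower = upper - 1/(30*m**5)                # inequality (3)
    return lower, upper

for m in (5, 10):
    lo, hi = psi1_bracket(m)
    lower, upper = bounds(m)
    print(m, lower <= lo, hi <= upper)
```

The margins are tiny (a few parts in $10^7$ for $m=5$), which reflects how sharp the creative-telescoping bounds are.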


Addendum: if we apply creative telescoping like in the second part of this answer, i.e. by noticing that $k(x)=\frac{60x^2-60x+31}{60x^3-90x^2+66x-18}$ gives $k(x)-k(x+1)=\frac{1}{x^2}+O\left(\frac{1}{x^8}\right)$, we arrive at

$$\begin{eqnarray*} m!&\approx& 2^{\frac{37-32m}{42}}e^{\frac{1}{84} \left(42-\sqrt{35} \pi -84 m+2 \sqrt{35} \arctan\left[\sqrt{\frac{5}{7}} (2m-1)\right]\right)} \\ &\cdot&\sqrt{\pi}\, m\,(2m-1)^{\frac{8}{21}(2m-1)}\left(m^2-m+\frac{3}{5}\right)^{\frac{5}{84}(2m-1)}\tag{6}\end{eqnarray*} $$ that is much more accurate than the "usual" Stirling's inequality, but also way less "practical".
However, it might be fun to plug in different values of $m$ in $(6)$ to derive bizarre approximate identities involving $e,\pi,\sqrt{\pi}$ and values of the arctangent function, like

$$ \sqrt{\pi}\,\exp\left[-\frac{1}{42}\left(147+\sqrt{35} \arctan\frac{1}{\sqrt{35}}\right)\right]\approx 2^{\frac{19}{6}}3^{\frac{1}{6}}5^{\frac{5}{12}} 7^{-\frac{37}{12}}.\tag{7}$$

  • 7
    (+1) Nice rewriting of $\sum\limits_{n\ge m}\frac1{n^2}$. That makes for a short derivation of the approximation. My [original derivation](http://math.stackexchange.com/revisions/95454/1) of the asymptotic series was once much shorter than my current post, but I was nudged to add more details.2015-08-25
  • 3
Nice one! (+1)2016-02-25
45

Inspired by the two references below, here's a short proof stripped of motivation and details.

For $t>0$, define

$$ g_t(y) = \begin{cases} \displaystyle \left(1+\frac{y}{\sqrt{t}}\right)^{\!t} \,e^{-y\sqrt{t}} & \text{if } y>-\sqrt{t}, \\ 0 & \text{otherwise}. \end{cases} $$

It is not hard to show that $t\mapsto g_t(y)$ is decreasing for $y\geq 0$ and increasing for $y\leq 0$. The limit is $g_\infty(y)=\exp(-y^2/2)$, so dominated convergence gives

$$ \lim_{t\to\infty}\int_{-\infty}^\infty g_t(y)\,dy = \int_{-\infty}^\infty \exp(-y^2/2)\,dy = \sqrt{2\pi}. $$

Use the change of variables $x=y\sqrt{t}+t$ to get

$$ t! = \int_0^\infty x^t e^{-x}\,dx = \left(\frac{t}{e}\right)^t \sqrt{t} \int_{-\infty}^\infty g_t(y)\,dy. $$

References:

[1] J.M. Patin, A very short proof of Stirling's formula, American Mathematical Monthly 96 (1989), 41-42.

[2] Reinhard Michel, The $(n+1)$th proof of Stirling's formula, American Mathematical Monthly 115 (2008), 844-845.
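Not needed for the proof, but the substitution is easy to test numerically: the integral of $g_t$ should equal $t!\big/\big((t/e)^t\sqrt{t}\big)$ and creep toward $\sqrt{2\pi}$ as $t$ grows. A crude trapezoidal rule suffices (names are illustrative).

```python
import math

# Sanity check of the substitution x = y*sqrt(t) + t: the integral of
# g_t(y) = (1 + y/sqrt(t))^t e^(-y sqrt(t))  (zero for y <= -sqrt(t))
# should equal t! / ((t/e)^t sqrt(t)), and tend to sqrt(2*pi).
def g_integral(t, lo=-30.0, hi=30.0, steps=200_000):
    h = (hi - lo) / steps
    rt = math.sqrt(t)
    total = 0.0
    for i in range(steps + 1):
        y = lo + i * h
        if y <= -rt:
            continue  # g_t vanishes here
        val = math.exp(t * math.log1p(y / rt) - y * rt)
        w = 0.5 if i in (0, steps) else 1.0
        total += w * val
    return total * h

t = 200.0
lhs = g_integral(t)
# t! / ((t/e)^t sqrt(t)), computed in logs via lgamma
rhs = math.exp(math.lgamma(t + 1) - t * (math.log(t) - 1) - 0.5 * math.log(t))
print(lhs, rhs, math.sqrt(2 * math.pi))
```

For $t=200$ the two sides agree to quadrature accuracy, and both sit within about $10^{-3}$ of $\sqrt{2\pi}$, matching the dominated-convergence argument.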

  • 0
    Hi Professor Schmuland, can I ask you a quick question regarding Stirling's formula? If I am asked to show that the partial sums of $log(j) = nlogn - n + O(logn)$, and I started off with $\sum log(j) = log(n!) = log(\sqrt{2\pi n}(\frac{n}{e})^n)$ = ... and continued from here, do you think it's ok, and that I am not possibly using the result to prove the result? I am a little concerned about making this first move since on the Wikipedia page of Stirling's formula, the result that I am trying to prove shows up early in the discussion. Thanks for your time, @ByronSchmuland,2015-12-06
  • 0
    @Dr.MV Thanks for the kind words.2016-03-25
  • 0
    You're welcome. My pleasure. And thank you for this nice post. -Mark2016-03-25
19

The way I've usually shown Stirling's Asymptotic Approximation starts with the formula $$ \begin{align} n! &=\int_0^\infty x^ne^{-x}\;\mathrm{d}x\\ &=\int_0^\infty e^{-x+n\log(x)}\;\mathrm{d}x\\ &=\int_0^\infty e^{-nx+n\log(nx)}n\;\mathrm{d}x\\ &=n^{n+1}\int_0^\infty e^{-nx+n\log(x)}\;\mathrm{d}x\\ &=n^{n+1}e^{-n}\int_{-1}^\infty e^{-nx+n\log(1+x)}\;\mathrm{d}x\\ &=n^{n+1}e^{-n}\int_{-\infty}^\infty e^{-nu^2/2}x'\;\mathrm{d}u\tag{1} \end{align} $$ Where $u^2/2=x-\log(1+x)$. That is, $u(x)=x\sqrt{\frac{x-\log(1+x)}{x^2/2}}$, which is analytic near $x=0$: the singularity of $\frac{x-\log(1+x)}{x^2/2}$ at $x=0$ is removable, and since $\frac{x-\log(1+x)}{x^2/2}$ is near $1$ when $x$ is near $0$, $\sqrt{\frac{x-\log(1+x)}{x^2/2}}$ is analytic near $x=0$. Since $u(0)=0$ and $u'(0)=1$, the Lagrange Inversion Theorem says that $x$ is an analytic function of $u$ near $u=0$, with $x(0)=0$ and $x'(0)=1$.

Note that since $u^2/2=x-\log(1+x)$, we have $$ u(1+x)=xx'\tag{2} $$ from which it follows that $\lim\limits_{u\to+\infty}\dfrac{x'(u)}{u}=1$ and $\lim\limits_{u\to-\infty}\dfrac{x'(u)}{ue^{-u^2/2}}=-1$. That is, $x'(u)=O(u)$ for $|u|$ near $\infty$.

Because $x$ is an analytic function with $x(0)=0$ and $x'(0)=1$, $$ x=\sum_{k=1}^\infty a_ku^k\tag{3} $$ where $a_1=1$. Then, looking at the coefficient of $u^n$ in $(2)$, we get $$ \begin{align} a_{n-1} &=\sum_{k=1}^na_{n-k+1}ka_k\\ &=\sum_{k=1}^na_k(n-k+1)a_{n-k+1}\\ &=\frac{n+1}{2}\sum_{k=1}^na_ka_{n-k+1}\tag{4} \end{align} $$ Therefore, we get the recursion $$ a_n=\frac{a_{n-1}}{n+1}-\frac{1}{2}\sum_{k=2}^{n-1}a_ka_{n-k+1}\tag{5} $$ Thus, we get the beginning of the power series for $x(u)$ to be $$ x=u+\tfrac{1}{3}u^2+\tfrac{1}{36}u^3-\tfrac{1}{270}u^4+\tfrac{1}{4320}u^5+\tfrac{1}{17010}u^6-\tfrac{139}{5443200}u^7+\tfrac{1}{204120}u^8+\dots\tag{6} $$ Integration by parts yields the following two identities: $$ \int_{-\infty}^\infty e^{-nu^2}|u|^{2k+1}\;\mathrm{d}u=\frac{k!}{n^{k+1}}\tag{7a} $$ and $$ \int_{-\infty}^\infty e^{-nu^2}u^{2k}\;\mathrm{d}u=\frac{(2k-1)!!}{2^kn^{k+1/2}}\sqrt{\pi}\tag{7b} $$ Furthermore, we have the following tail estimates: $$ \begin{align} \int_{|u|>a}e^{-nu^2}\;\mathrm{d}u &=\int_a^\infty e^{-nu^2}u^{-1}\;\mathrm{d}u^2\\ &=\int_{a^2}^\infty e^{-nu}u^{-1/2}\;\mathrm{d}u\\ &\le\frac{1}{a}\int_{a^2}^\infty e^{-nu}\;\mathrm{d}u\\ &=\frac{1}{na}\int_{na^2}^\infty e^{-u}\;\mathrm{d}u\\ &=\frac{1}{na}e^{-na^2}\tag{8a} \end{align} $$ and for $k\ge1$, $$ \begin{align} \int_{|u|>a}e^{-nu^2}|u|^{k}\;\mathrm{d}u &=\int_a^\infty e^{-nu^2}u^{k-1}\;\mathrm{d}u^2\\ &=\int_{a^2}^\infty e^{-nu}u^{(k-1)/2}\;\mathrm{d}u\\ &=n^{-(k+1)/2}\int_{na^2}^\infty e^{-u}u^{(k-1)/2}\;\mathrm{d}u\\ &=n^{-(k+1)/2}\int_{na^2}^\infty e^{-u/2}e^{-u/2}u^{(k-1)/2}\;\mathrm{d}u\\ &\le n^{-(k+1)/2}\int_{na^2}^\infty e^{-u/2}\left(\frac{k-1}{e}\right)^{(k-1)/2}\;\mathrm{d}u\\ &=\frac{2}{n}\left(\frac{k-1}{ne}\right)^{(k-1)/2}e^{-na^2/2}\tag{8b} \end{align} $$ The tail estimates in $(8)$ are $O\left(\frac{1}{n}e^{-na^2/2}\right)$ for all $k\ge0$. That is, they decay faster than any power of $n$.

Define $$ f_k(u)=x'(u)-a_1-2a_2u-\dots-2ka_{2k}u^{2k-1}\tag{9} $$ Since $x(u)$ is analytic near $u=0$, $f_k(u)=O\left(u^{2k}\right)$ near $u=0$. Since $x'(u)=O(u)$ for $|u|$ near $\infty$, $f_k(u)=O\left(u^{2k-1}\right)$ for $|u|$ near $\infty$. Therefore, $$ \begin{align} \int_{-\infty}^\infty e^{-nu^2/2}f_k(u)\;\mathrm{d}u &=\int_{|u|< a}e^{-nu^2/2}f_k(u)\;\mathrm{d}u + \int_{|u|> a}e^{-nu^2/2}f_k(u)\;\mathrm{d}u\\ &\le\int_{|u|< a}e^{-nu^2/2}C_1u^{2k}\;\mathrm{d}u + \int_{|u|> a}e^{-nu^2/2}C_2|u|^{2k-1}\;\mathrm{d}u\\ &=O\left(\frac{1}{n^{k+1/2}}\right)\tag{10} \end{align} $$ Therefore, using $\mathrm{(7b)}$ and $(10)$, we get $$ \begin{align} n! &=n^{n+1}e^{-n}\int_{-\infty}^\infty e^{-nu^2/2}\left(1+\tfrac{1}{12}u^2+\tfrac{1}{864}u^4-\tfrac{139}{777600}u^6+f_4(u)\right)\;\mathrm{d}u\\ &+\;n^{n+1}e^{-n}\int_{-\infty}^\infty e^{-nu^2/2}\left(\tfrac{2}{3}u-\tfrac{2}{135}u^3+\tfrac{1}{2835}u^5+\tfrac{1}{25515}u^7\right)\;\mathrm{d}u\\ &=\sqrt{2n}\;n^ne^{-n}\left(\int_{-\infty}^\infty e^{-u^2}\left(1+\tfrac{1}{6n}u^2+\tfrac{1}{216n^2}u^4-\tfrac{139}{97200n^3}u^6\right)\;\mathrm{d}u+O\left(\frac{1}{n^4}\right)\right)\\ &=\sqrt{2\pi n}\;n^ne^{-n}\left(1+\frac{1}{12n}+\frac{1}{288n^2}-\frac{139}{51840n^3}+O\left(\frac{1}{n^4}\right)\right)\tag{11} \end{align} $$
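The expansion in $(11)$ can be spot-checked in floating point: truncating the series after the $n^{-3}$ term should leave a residual of size $O(n^{-4})$. The helper names below are illustrative only.

```python
import math

# Compare n!/(sqrt(2*pi*n) (n/e)^n) against the truncated series in (11);
# the residual should shrink roughly 16-fold when n doubles.
def correction(n):
    return math.exp(math.lgamma(n + 1) - 0.5 * math.log(2 * math.pi * n)
                    - n * math.log(n) + n)

def series(n):
    return 1 + 1/(12*n) + 1/(288*n**2) - 139/(51840*n**3)

for n in (10, 20, 40):
    print(n, correction(n) - series(n))
```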

  • 0
    The change of variable $x\to u$ and the step where you develop $x$ as a series in $u$ require some more justification, I believe. Are we supposed to understand that $x$ in $(-1,0)$ corresponds to $u\lt0$ and that $x\gt0$ corresponds to $u\gt0$?2011-12-31
  • 0
    @Didier: Yes; that was the intent. Since $u^2=x^2+x^3f(x)$, where $f$ is analytic near $0$, we can write $u=x(1+xf(x))^{1/2}=x(1+xg(x))$ where $g$ is analytic near $0$. Now, the setting should be right to use the inverse function theorem on $u=x+x^2g(x)$. I am not claiming that the power series for $x$ in terms of $u$ extends to all $u$, but as $n\to\infty$, we use a smaller and smaller neighborhood of $0$.2011-12-31
  • 5
    Sorry but I do not think I can buy this as a full proof.2011-12-31
  • 0
    @Didier: What is bothering you about my answer? Reverting $u^2/2=x-\log(1+x)$ should not be a problem. The power series may not converge everywhere, but all we need for the asymptotic expansion is that the even part of $x'$ is $1+\frac{1}{12}u^2+\frac{1}{864}u^4-\frac{139}{777600}u^6+O(u^8)$, which seems clear since $x'=u(1+1/x)$.2011-12-31
  • 3
    The main problem is that you apply tools valid for [formal power series](http://en.wikipedia.org/wiki/Formal_power_series#Inverting_series), to get results on functions coinciding with the sums of [power series](http://en.wikipedia.org/wiki/Power_series). Such a move is tempting and there are results which allow it, but in a strict setting only (I recommend pondering the first paragraph of the WP page on formal power series). Imagine for a moment that the recursion is solved by $a_k\sim k^k$. What next? Would you consider that the asymptotics $a_0+a_1/n+a_2/n^2+O(1/n^3)$ .../...2012-01-01
  • 2
    .../... is valid? Likewise: what are the dots in (6) and (8)? Some rests of converging series? The next terms of some asymptotic series? Something else? [Divergent series](http://fr.wikipedia.org/wiki/S%C3%A9rie_divergente) are a venerable subject but, here again, some caution is needed. Finally: when you write that *Thus, $x$ has a power series in $u$ like $x=0+u+\ldots$*, this probably means that, IF $x$ coincides with the sum of such a power series, then the beginning of the power series must be $0+u+$some higher order terms (perfectly correct). But you use it to .../...2012-01-01
  • 2
    .../... assert that $x$ IS a power series in $u$ (which is an entirely different matter). // All in all, it might be possible to address these points and to reach a rigorous solution based on this method, but your post does not do that, yet.2012-01-01
  • 10
    @Didier: I applied the tools valid for formal power series to get the coefficients of $x$ as a power series in $u$. I was not relying on them to show that $x$ was analytic in $u$; that is handled by Lagrange Inversion. I have now mentioned that explicitly in my answer. The dots in $(6)$ are the next terms in the power series for $x$. The first two sets of dots in $(9)$ are the next terms in the power series, while the last set of dots are the next terms in the asymptotic expansion. I have rearranged and amended my answer; does it now seem more complete?2012-01-03
  • 0
It is definitely better but, in my opinion, not a full proof yet. [An answer](http://math.stackexchange.com/a/96308/6179) to the main post explains why.2012-01-04
  • 0
    Aaaaahh... There are still some minor typos in the last version, but nothing the interested readers cannot correct themselves, I believe. I will soon delete my other post. Well done, @robjohn.2012-01-05
  • 0
    @Didier: I would like to correct any typos. I might as well post a better answer, so if you don't mind pointing out some, I will fix them.2012-01-05
  • 0
    Typo the singular, actually (sorry): *Since $x=O(u)$ for $|u|$ near $\infty$* should read *Since $x'=O(u)$ for $|u|$ near $\infty$* (right after (9)). You could also replace some $x$ and $x'$ by $x(u)$ and $x'(u)$ respectively, starting from the last equation in (1). Or not.2012-01-05
  • 0
    @Didier: thanks! I fixed the typo and expressed the fact that $x$ is a function of $u$ by changing $x$ to $x(u)$ in some places.2012-01-05
  • 0
    Hi @robjohn, if I want to use Stirling's approximation $n!$ is approx. $\sqrt{2\pi n}(\frac{n}{e})^n$, but I have to show a remainder term in my proof is in $O(log(n))$, then what remainder term does Stirling's approximation give? For example is $n! = \sqrt{2 \pi n} (\frac{n}{e})^n + O(n)$? Thanks,2015-12-06
  • 0
    I am basically proving out Stirling's approximation, although the problem statement doesn't actually say it is, so I hadn't realized until an MSE contributor told me about it. But the estimate required seems a bit sharper than what I have found on MSE -- I need to show that the partial sums of $log(j) = log(n!) = nlogn - n + 1 + O(logn)$, whereas a proof I found on MSE gives an estimate of $O(n)$ for the remainder term.2015-12-06
  • 0
    So, if I cannot do that - I've probably been trying too long at this point - then I will use Stirling's approximation instead. (My current attempt, not using the Stirling approx. for $n!$, is looking at the difference between sum and integral of $log(j)$, which almost works, but showing the $O(log(n))$ is difficult.) So I am just wondering what the big-O term $n!$ comes with, so that I know how to proceed with the arithemetic. Thanks, @robjohn2015-12-06
  • 1
    @LebronJames: if you look at $(11)$, you see that the "remainder term" of Stirling's approximation is $O\left(\frac{n^n}{e^n\sqrt{n}}\right)$.2015-12-06
  • 0
    Hi @robjohn, thanks for your response. Can I ask you one final question on this, if you don't mind? I was able to show the $nlogn - n + O(log(n))$, using the Abel summation formula. But, in comparing my work to another student's solution, it uses $\sum log(j) = log1 + log 2 + ... + logn = log(n!) = log(\sqrt{2 \pi n} (\frac{n}{e})^n) = ...$ and it continues from here. Is this student's solution incorrect?2015-12-07
  • 0
    I feel that this might be a case of using the result to prove the result. On Wikipedia's page, it says that $\sum log(j) = nlogn - n + O(logn)$ and the approximation $\sqrt{2 \pi n} (\frac{n}{e})^n$ are 'variants' of each other. What do you think? Thanks @robjohn,2015-12-07
  • 1
    @LebronJames: what is it you are trying to prove? First you ask if $n!=\sqrt{2\pi n}n^ne^{-n}+O(n)$, which is false. I gave you the "remainder term", which is *much* larger than $O(n)$. Then it appears you are looking at the log of $n!$: $\sum\limits_{k=1}^n\log(k)=n\log(n)-n+O(\log(n))$, which *is* true.2015-12-07
  • 0
    Hi @robjohn, so I am trying to prove the $nlog(n) - n + O(log(n))$, but on the L.H.S., I was wondering whether I could start with $\sum log(k) = log(n!) = log(\sqrt{2 \pi n})n^ne^{-n}$, and then proceed from here to show the $nlog(n) - n + O(log(n))$. Would this be circular logic, using Stirling's approximation on the L.H.S., when the R.H.S. is *also* Stirling's approximation, at least according to Wikipedia...2015-12-08
  • 0
    I've already proved it successfully, using Abel's summation instead, but was wondering whether the technique I saw in a solution is legitimate, @robjohn. I always try to think of a second or maybe third solution to a problem, out of curiosity, before moving on to the next problem...thanks,2015-12-08
  • 0
    But here Wikipedia isn't so explicit, or maybe I just don't know Stirling's formula well enough. Is it saying that the n! approximation and the nlogn - n + O(log(n)) are equivalent, when it says that they are "variants" of each other? If they are equivalent, then, using the n! approximation to show the nlogn-n+O(logn) is probably an incorrect approach. So, I'm just a little confused about this part, @robjohn, thanks,2015-12-08
19

Here is a derivation of Stirling's Formula as given by

$$\bbox[5px,border:2px solid #C0A000]{\sqrt{2\pi n}\left(\frac ne\right)^n\left(1+\frac{1}{12(n+1)}\right) \le n!\le \sqrt{2\pi n}\left(\frac ne\right)^n\left(1+\frac{1}{12(n-2)}\right)}\tag 1$$

In the ensuing development, we will use $(i)$ the formula for the trapezoidal rule for twice differentiable functions, $(ii)$ Wallis's Formula for $\pi$, and $(iii)$ standard inequalities.


USING THE TRAPEZOIDAL RULE:

To prove $(1)$, we begin with the trapezoidal rule formula

$$\int_a^b f(x)\,dx=\frac12\left(f(a)+f(b)\right)(b-a)-\frac1{12}f''(\xi)(b-a)^3\tag 2$$

for which $f''(x)$ exists for $x\in[a,b]$ and $\xi\in (a,b)$. Letting $f(x)=\log(x)$ in $(2)$ shows that

$$\begin{align} \int_1^n \log(x)\,dx&=\sum_{m=1}^{n-1} \int_m^{m+1}\log(x)\,dx\\\\ &=\sum_{m=1}^{n-1} \left(\frac12 (\log(m)+\log(m+1))+\frac1{12}\frac{1}{\xi_m^2}\right)\tag 3 \end{align}$$

where $m<\xi_m<m+1$.

Now, using $\int_1^n \log(x)\,dx=n\log(n)-n+1$ and noting that the trapezoidal terms sum to $\sum_{m=1}^{n-1}\frac12\left(\log(m)+\log(m+1)\right)=\log(n!)-\frac12\log(n)$, we set $C_n=\frac1{12}\sum_{m=1}^{n-1}\frac{1}{\xi_m^2}$ and rewrite $(3)$ as

$$\log(n!)=\log\left(e^{1-C_n}\sqrt{n}\left(\frac{n}{e}\right)^n\right)$$

which becomes

$$\bbox[5px,border:2px solid #C0A000]{n!=e^{1-C_n}\sqrt{n}\left(\frac{n}{e}\right)^n }\tag 4$$


USING WALLIS'S PRODUCT FORMULA:

To evaluate the limit $\lim_{n\to \infty}e^{1-C_n}$, we rely on Wallis's Product for $\pi$ as given by

$$\frac{\pi}{2}=\lim_{n\to \infty}\frac{2^{4n}(n!)^4}{(2n+1)((2n)!)^2}\tag 5$$

Using $(4)$ in $(5)$ shows that

$$\frac{\pi}{2}=\lim_{n\to \infty}\frac{n^2}{(2n)(2n+1)}\left(e^{1-2C_n+C_{2n}}\right)^2$$

from which we find that

$$\bbox[5px,border:2px solid #C0A000]{\lim_{n\to \infty}e^{1-C_n}=\sqrt{2\pi}}\tag 6$$


PUTTING IT TOGETHER TO ARRIVE AT STIRLING'S FORMULA

We are now able to write the factorial

$$n!=\sqrt{2\pi n}\left(\frac ne\right)^n e^{C-C_n} \tag 7$$

Since $C_n\to C$, then we have Stirling's Formula

$$\bbox[5px,border:2px solid #C0A000]{n!\sim \sqrt{2\pi n}\left(\frac ne\right)^n }$$

as was to be shown!


DERIVING BOUNDS FOR $\displaystyle n!$:

It is easy to see that since $\frac{1}{(m+1)^2}<\frac1{\xi_m^2}<\frac{1}{m^2}$, then

$$C-C_n<\frac1{12}\int_{n-1}^\infty \frac{1}{x^2}\,dx=\frac{1}{12(n-1)}$$

and

$$C-C_n>\frac1{12}\int_{n+1}^\infty \frac{1}{x^2}\,dx=\frac{1}{12(n+1)}$$

Next, for $x<1/2$ the Mean-Value Theorem guarantees that

$$1+x\le e^x\le 1+\frac{x}{1-x}$$

Hence, we see that for $n\ge 3$

$$1+\frac{1}{12(n+1)}\le e^{C-C_n}\le 1+\frac{1}{12(n-2)}\tag 8$$

Finally, using $(8)$ in $(7)$ yields the coveted bounds in $(1)$

$$\bbox[5px,border:2px solid #C0A000]{\sqrt{2\pi n}\left(\frac ne\right)^n\left(1+\frac{1}{12(n+1)}\right) \le n!\le \sqrt{2\pi n}\left(\frac ne\right)^n\left(1+\frac{1}{12(n-2)}\right)}$$
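As a quick check (outside the proof), the two-sided bounds in $(1)$ can be verified directly with exact factorials for a range of $n$; the upper bound needs $n\ge 3$ so that $12(n-2)>0$.

```python
import math

# Numerically spot-check the two-sided Stirling bounds in (1) for n >= 3.
def stirling(n):
    return math.sqrt(2 * math.pi * n) * (n / math.e) ** n

for n in range(3, 60):
    f = math.factorial(n)
    lower = stirling(n) * (1 + 1 / (12 * (n + 1)))
    upper = stirling(n) * (1 + 1 / (12 * (n - 2)))
    assert lower <= f <= upper
print("bounds in (1) hold for 3 <= n < 60")
```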

  • 0
    @FelixMarin Thank you.2017-03-16
  • 0
    This appears to be very elementary. +1 The proof I studied was based on Euler Maclaurin summation.2017-09-26
  • 0
    @ParamanandSingh Indeed. This development is quite straightforward, which is the reason for my posting it. And thank you for the up vote!2017-09-26
12

Here are a couple of "proofs" of Stirling's formula. They are quite elegant (in my opinion), but not rigorous. One could write down a real proof from these, but as they rely on some hidden machinery, the result would be quite heavy.

1) A probabilistic non-proof

We start from the expression $e^{-n} n^n/n!$, of which we want to find an equivalent. Let us fix $n$, and let $Y$ be a random variable with a Poisson distribution of parameter $n$. By definition, for any integer $k$, we have $\mathbb{P} (Y=k) = e^{-n} n^k/k!$. If we take $k=n$, we get $\mathbb{P} (Y=n) = e^{-n} n^n/n!$. The sum of $n$ independent random variables with a Poisson distribution of parameter $1$ has a Poisson distribution of parameter $n$; so let us take a sequence $(X_k)$ of i.i.d. random variables with a Poisson distribution of parameter $1$. Note that $\mathbb{E} (X_0) = 1$. We have:

$$\mathbb{P} \left( \sum_{k=0}^{n-1} (X_k - \mathbb{E} (X_k)) = 0 \right) = \frac{e^{-n} n^n}{n!}.$$

In other words, $e^{-n} n^n/n!$ is the probability that a centered random walk with Poissonian steps of parameter $1$ is at $0$ at time $n$. We have tools to estimate such quantities, namely local central limit theorems. They assert that:

$$\frac{e^{-n} n^n}{n!} = \mathbb{P} \left( \sum_{k=0}^{n-1} (X_k - \mathbb{E} (X_k)) = 0 \right) \sim \frac{1}{\sqrt{2 \pi n \text{Var} (X_0)}},$$

a formula which is closely linked with the Gaussian integral and diffusion processes. Since the variance of $X_0$ is $1$, we get:

$$n! \sim \sqrt{2 \pi n} n^n e^{-n}.$$

The catch is of course that the local central limit theorems are in no way elementary results (except for the simple random walks, and that is if you already know Stirling's formula...). The methods I know to prove such results involve Tauberian theorems and residue analysis. In some way, this probabilistic stuff is a way to disguise more classical approaches (in my defense, if all you have is a hammer, everything looks like a nail).

I think one could get higher order terms for Stirling's formula by computing more precise asymptotics for Green's function in $0$, which requires the knowledge of higher moments for the Poisson distribution. Note that the generating function for a Poisson distribution of parameter $1$ is:

$$\mathbb{E} (e^{t X_0}) = e^{e^t-1},$$

and this "exponential of exponential" will appear again in a moment.
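The local CLT itself is the hard part, but the Poisson identity driving the non-proof is easy to check numerically: $\mathbb{P}(\mathrm{Poisson}(n)=n)=e^{-n}n^n/n!$, computed in logs, against the predicted $1/\sqrt{2\pi n}$ (function name illustrative).

```python
import math

# P(Poisson(n) = n) = e^(-n) n^n / n!, computed in logs via lgamma,
# compared with the local-CLT prediction 1/sqrt(2*pi*n).
def p_poisson_at_mean(n):
    return math.exp(-n + n * math.log(n) - math.lgamma(n + 1))

for n in (10, 100, 1000, 10000):
    lhs = p_poisson_at_mean(n)
    rhs = 1 / math.sqrt(2 * math.pi * n)
    print(n, lhs, rhs, lhs / rhs)
```

The ratio approaches $1$ from below, at the rate $1-\frac{1}{12n}+O(n^{-2})$ one would expect from the full Stirling series.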

2) A generating functions non-proof

If you want to apply analytic methods to problems related to sequences, generating functions are a very useful tool. Alas, the series $\sum_{n \geq 0} n! z^n$ is not convergent for non-zero values of $z$. Instead, we shall work with:

$$e^z = \sum_{n \geq 0} \frac{z^n}{n!};$$

we are lucky, as this generating function is well-known. Let $\gamma$ be a simple loop around $0$ in the complex plane, oriented counter-clockwise. Let us fix some non-negative integer $n$. By Cauchy's formula,

$$\frac{1}{n!} = \frac{1}{n!} \frac{\text{d} e^z}{\text{d} z}_{|_0} = \frac{1}{2 \pi i} \oint_\gamma \frac{e^z}{z^{n+1}} \text{d} z.$$

We choose for $\gamma$ the circle of radius $n$ around $0$, with its natural parametrization $z (t) = n e^{it}$:

$$\frac{1}{n!} = \frac{1}{2 \pi i} \int_{- \pi}^{\pi} \frac{e^{n e^{it}}}{n^{n+1} e^{(n+1)it}} nie^{it} \text{d} t = \frac{e^n}{2 \pi n^n} \int_{- \pi}^{\pi} e^{n (e^{it}-it-1)} \text{d} t = \frac{e^n}{2 \pi \sqrt{n} n^n} \int_{- \pi \sqrt{n}}^{\pi \sqrt{n}} e^{n \left(e^{\frac{i\theta}{\sqrt{n}}}-\frac{i\theta}{\sqrt{n}}-1\right)} \text{d} \theta,$$

where $\theta =t \sqrt{n}$. Hitherto, we have an exact formula; note that we meet again the "exponential of exponential". Now comes the leap of faith. For $x$ close to $0$, the value of $e^x-x-1$ is roughly $x^2/2$. Moreover, the bounds of the integral get close to $- \infty$ and $ \infty$. Hence, for large $n$, we have:

$$\frac{1}{n!} \sim \frac{e^n}{2 \pi \sqrt{n} n^n} \int_{- \infty}^{+ \infty} e^{\frac{n}{2} \left(\frac{i\theta}{\sqrt{n}}\right)^2} \text{d} \theta = \frac{e^n}{2 \pi \sqrt{n} n^n} \int_{- \infty}^{+ \infty} e^{-\frac{\theta^2}{2}} \text{d} \theta = \frac{e^n}{\sqrt{2 \pi n} n^n}.$$

Of course, it is not at all trivial to prove that the asymptotic equivalences we used are rigorous. Indeed, if one applies this method to bad generating functions (e.g. $(1-z)^{-1}$), one can get false results. However, this can be done for some admissible functions, and the exponential is one of them.

I have learnt this method thanks to Don Zagier. It is also explained in *Generatingfunctionology*, Chapter $5$ (III), where the author credits Hayman. The original reference seems to be *A generalisation of Stirling's formula* (Hayman, 1956), but I can't read it now.

One of the advantages of this method is that it is very easy to get the next terms in the asymptotic expansion of $n!$: you just have to expand the function $e^x-x-1$ further at $0$. Another advantage is that it is quite general, as it can be applied to many other sequences.
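The exact contour formula (the step before the leap of faith) can be tested numerically. By symmetry only the real part of the integrand contributes, and the trapezoidal rule is very accurate on smooth periodic integrands; all names here are illustrative.

```python
import math

# Evaluate 1/n! = (e^n / (2*pi*n^n)) * Int_{-pi}^{pi} exp(n(e^{it}-it-1)) dt.
# The real part of the integrand is exp(n(cos t - 1)) cos(n(sin t - t));
# the imaginary part is odd in t and integrates to zero.
def contour_integral(n, steps=20_000):
    h = 2 * math.pi / steps
    total = 0.0
    for i in range(steps):  # trapezoid == equal weights on a periodic function
        t = -math.pi + i * h
        total += math.exp(n * (math.cos(t) - 1)) * math.cos(n * (math.sin(t) - t))
    return total * h

n = 20
I = contour_integral(n)
# consistency: (e^n / (2 pi n^n)) * I * n!  should equal 1
val = math.exp(n - n * math.log(n) + math.lgamma(n + 1)) * I / (2 * math.pi)
print(val)
```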

  • 1
    I'd like to have a clarification on the following equation: $$\mathbb{P} \left( \sum_{k=0}^{n-1} (X_k - \mathbb{E} (X_k)) = 0 \right) \sim \frac{1}{\sqrt{2 \pi n \text{Var} (X_0)}}$$ Does it use an approximation of the standard normal cumulative distribution function? I can't see which formulation of the central limit theorem is used. Is it a De Moivre-Laplace like formula?2013-03-25
  • 2
@ Allessandro Cuttin: No. It is an instance of a Local Central Limit Theorem. These are proved via spectral methods, much like the standard proof of the CLT, but the machinery involved is usually heavier.2013-08-11
11

Instead of a proof, how about a string of hints? This comes from Maxwell Rosenlicht's Introduction to Analysis (a great little easy-to-read text which is dirt cheap -- it's a Dover paperback).

  • Chapter VI: Problem #22 Show that for $n=1,2,3,\dots$ we have that $$ 1+\frac{1}{2}+\frac{1}{3}+\cdots+\frac{1}{n}-\ln(n)$$ is positive and it decreases as $n$ increases. Thus this converges to a number between 0 and 1 (Euler's constant).

  • Chapter VII: Problem #39 For $n=0,1,2,\dots$ let $I_n=\int_0^{\pi/2} \sin^n(x)\,dx$. Show that

(a) $\frac{d}{dx}\left(\cos(x)\sin^{n-1}(x)\right) = (n-1)\sin^{n-2}(x)-n\sin^n(x)$

(b) $I_n = \frac{n-1}{n}I_{n-2}$ if $n \geq 2$

(c) $I_{2n} = \frac{1\cdot 3\cdot 5\cdots (2n-1)}{2\cdot 4\cdot 6\cdots (2n)}\frac{\pi}{2}$ and $I_{2n+1} = \frac{2\cdot 4\cdot 6\cdots (2n)}{3\cdot 5\cdot 7\cdots (2n+1)}$ for $n=1,2,3,\dots$

(d) $I_0, I_1, I_2, \dots$ is a decreasing sequence having the limit zero and $$ \lim_{n \to \infty} \frac{I_{2n+1}}{I_{2n}}=1$$

(e) Wallis' product: $$\lim_{n \to \infty} \frac{2\cdot 2\cdot 4\cdot 4\cdot 6\cdot 6 \cdots (2n)\cdot (2n)}{1\cdot 3\cdot 3\cdot 5\cdot 5\cdot 7 \cdots (2n-1)\cdot (2n+1)}=\frac{\pi}{2}$$

  • Chapter VII: Problem #40

(a) Show that if $f:\{ x\in \mathbb{R}\;|\; x\geq 1\} \to \mathbb{R}$ is continuous, then $$ \sum\limits_{i=1}^n f(i) = \int_1^{n+1} f(x)\,dx + \sum\limits_{i=1}^n\left(f(i)-\int_i^{i+1} f(x)\,dx\right)$$

(b) Show that if $i>1$, then $\ln(i)-\int_i^{i+1}\ln(x)\,dx$ differs from $-1/2i$ by less than $1/6i^2$. [Hint: Work out the integral using the Taylor series for $\ln(1+x)$ at the point $0$.]

(c) Use part (a) with $f=\ln$, part (b), and Problem #22 from Chapter VI to prove that the following limit exists: $$\lim_{n \to \infty} \left(\ln(n!)-\left(n+\frac{1}{2}\right)\ln(n)+n\right)$$

(d) Use part (e) of Problem #39 to compute the above limit, thereby obtaining: $$ \lim_{n\to \infty} \frac{n!}{n^ne^{-n}\sqrt{2\pi n}}=1$$ (i.e. Stirling's Formula)
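The key external input in this outline is Wallis' product from Problem #39(e); its partial products are easy to compute and converge (slowly, like $1/n$) to $\pi/2$. A minimal sketch, with illustrative names:

```python
import math

# Partial Wallis products: prod_{k=1}^n (2k)(2k) / ((2k-1)(2k+1)) -> pi/2.
def wallis(n):
    p = 1.0
    for k in range(1, n + 1):
        p *= (2.0 * k) * (2.0 * k) / ((2.0 * k - 1) * (2.0 * k + 1))
    return p

for n in (10, 1000, 100000):
    print(n, wallis(n))
print("pi/2 =", math.pi / 2)
```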

8

Depending on one's preferences, one might care that it is possible to understand Stirling's formula (perhaps really due to Laplace?) in a usefully more general context, namely, as an example of a "Laplace's method" or "stationary phase" sort of treatment of asymptotics of integrals.

This is available on-line on my page http://www.math.umn.edu/~garrett/m/v/, with the file being http://www.math.umn.edu/~garrett/m/v/basic_asymptotics.pdf

One might object to certain very-classical treatments which make Gamma appear as a singular thing. While I agree that it is of singular importance in applications within and without mathematics, the means of establishing its asymptotics are not.

5

If you're familiar with

$$\mathop {\lim }\limits_{n \to \infty } \frac{{\left( {2n} \right)!!}}{{\left( {2n - 1} \right)!!}}\frac{1}{{\sqrt n }} = \sqrt \pi $$

Then you can use

$$\eqalign{ & \mathop {\lim }\limits_{n \to \infty } \frac{{\left( {2n} \right)!{!^2}}}{{\left( {2n} \right)!}}\frac{1}{{\sqrt n }} = \sqrt \pi \cr & \mathop {\lim }\limits_{n \to \infty } \frac{{{2^{2n}}{{\left( {n!} \right)}^2}}}{{\left( {2n} \right)!}}\frac{1}{{\sqrt n }} = \sqrt \pi \cr} $$

Now you can check that

$$\alpha = \mathop {\lim }\limits_{n \to \infty } \frac{{n!{e^n}}}{{{n^n}\sqrt n }} = \mathop {\lim }\limits_{n \to \infty } \frac{{\left( {2n} \right)!{e^{2n}}}}{{{{\left( {2n} \right)}^{2n}}\sqrt {2n} }}$$

exists. Then square the first expression and divide by the latter to get

$$\alpha = \mathop {\lim }\limits_{n \to \infty } \frac{{{{\left( {n!} \right)}^2}{e^{2n}}}}{{{n^{2n}}n}}\frac{{{{\left( {2n} \right)}^{2n}}\sqrt {2n} }}{{\left( {2n} \right)!{e^{2n}}}} = \mathop {\lim }\limits_{n \to \infty } \frac{{{{\left( {n!} \right)}^2}{2^{2n}}\sqrt 2 }}{{\left( {2n} \right)!\sqrt n }} = \sqrt {2\pi } $$

Thus you have that

$$\mathop {\lim }\limits_{n \to \infty } \frac{{n!{e^n}}}{{{n^n}\sqrt {2n} }} = \sqrt \pi $$

or

$$n! \sim {n^n}{e^{ - n}}\sqrt {2\pi n} $$
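The starting limit can be checked numerically using the identities $(2n)!!=2^n\,n!$ and $(2n-1)!!=(2n)!/(2^n\,n!)$, working in logs to avoid overflow (names illustrative).

```python
import math

# Check (2n)!! / ((2n-1)!! sqrt(n)) -> sqrt(pi), computed in logs:
# log (2n)!! = n log 2 + log n!,  log (2n-1)!! = log (2n)! - log (2n)!!
def df_ratio(n):
    log_even = n * math.log(2) + math.lgamma(n + 1)
    log_odd = math.lgamma(2 * n + 1) - log_even
    return math.exp(log_even - log_odd - 0.5 * math.log(n))

for n in (10, 1000, 100000):
    print(n, df_ratio(n))
print("sqrt(pi) =", math.sqrt(math.pi))
```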

5

The best derivation of Stirling's approximation I have seen starts from Euler's Summation Formula, valid for integers $a\leq b$ and $m\geq 1$: $$ \sum_{a\leq k<b} f(k) = \int_a^b f(x)\,dx + \sum_{k=1}^m \frac{B_k}{k!}\left(f^{(k-1)}(b)-f^{(k-1)}(a)\right) + (-1)^{m+1}\int_a^b \frac{B_m(\{x\})}{m!}\,f^{(m)}(x)\,dx $$

Here, $f^{(n)}(x)$ refers to the $n$-th derivative of $f$ evaluated at $x$, $B_k$ is the $k$-th Bernoulli number, and $B_m(\{x\})$ is the $m$-th Bernoulli polynomial evaluated at the fractional part of $x$.

The "remainder term" $(-1)^{m+1} \int_a^b \frac{B_m(\{x\})}{m!}f^{(m)}(x)\,dx$ lies between $0$ and the first discarded term.

Applying Euler's Summation Formula to $f(x) = \ln(x)$, and using $a=1, b=n$, we obtain $$ \sum_{1\leq k<n} \ln k = n\ln n - n - \frac{\ln n}{2} + \ln \sqrt{2\pi} + \frac1{12n}-\frac1{360n^3} + \frac\epsilon{n^5}, $$ where the constant term, which the summation formula produces only as a limit, can be identified as $\ln\sqrt{2\pi}$ (for example via Wallis' product).

Adding $\ln n$ to both sides, we get $$ \ln n! = n\ln n - n + \frac{\ln n}{2} + \ln \sqrt{2\pi} + \frac1{12n}-\frac1{360n^3} + \frac\epsilon{n^5} $$ with an error term having $0 \leq \epsilon \leq \frac1{1260}$.

Lastly, exponentiate and series expand the terms of order $\frac1n$ or less to get $$ n! = \sqrt{2\pi n}\left(\frac{n}{e}\right)^n \left( 1 + \frac1{12n} + \frac1{288n^2} - \frac{139}{51840n^3} -\frac{571}{2488320n^4} + \frac{\rho}{n^5}\right) $$ with $-0.00001 < \rho < 0.00078$.

At this level of accuracy, the approximation, rounded to the nearest integer, is exact up to $n=11$ (of course, the relative error in the approximation falls as $\frac1{n^5}$ with a small coefficient, and so is extremely small).
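One can check the size of the remainder numerically: scaling the residual of the truncated expansion by $n^5$ should give a small, roughly constant value of the order of the next series coefficient. The helper names are illustrative.

```python
import math

# Residual of the Euler-Maclaurin expansion after the 1/n^4 term,
# rescaled by n^5; it should stay small and roughly constant.
def correction(n):
    return math.exp(math.lgamma(n + 1) - 0.5 * math.log(2 * math.pi * n)
                    - n * math.log(n) + n)

def series(n):
    return (1 + 1/(12*n) + 1/(288*n**2)
            - 139/(51840*n**3) - 571/(2488320*n**4))

for n in (5, 10, 20):
    rho = (correction(n) - series(n)) * n**5
    print(n, rho)
```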