6

Are there any other ways to demonstrate that $$\sin(x)=\sum_{k=0}^{\infty}\frac{(-1)^kx^{1+2k}}{(1+2k)!}$$

without using the definition of Taylor series of complex exponentials, and similarly for $\cos(x)$?

  • 2
    What about just plain ol' Taylor series?2011-08-15
  • 2
    You can do this using [Frobenius method](http://en.wikipedia.org/wiki/Frobenius_method) for $y''(x)+y(x)=0$.2011-08-15
  • 2
    You can use the fact that $\sin$ is the solution of the second-order ODE $y''+y=0,~y(0)=0,~y'(0)=1$. If this ODE admits a solution given by a power series, then the coefficients of that series have to obey a recurrence relation you can easily find. You can then verify that the series thus defined converges everywhere, because the coefficients go to zero very fast, and by uniqueness of solutions to ODEs, you have $\sin=\sum\dots$.2011-08-15
  • 0
    @Olivier isn't it easier to argue that $\sin$ and $x \mapsto \sum_{n=0}^\infty \frac{(-1)^nx^{2n+1}}{(2n+1)!}$ satisfy the same ODE with the same initial conditions and thus must be identical by your favourite ODE solution uniqueness theorem? (Picard-Lindelöf immediately comes to mind). Obviously you'd also have to argue that the radius of convergence of the power series is infinite.2011-08-16
  • 2
    You will need a definition for the sine function before you can prove anything about it. What is your definition?2011-08-16
  • 0
    @GEdgar: The unit circle definition I presume. I know it's not perfect because it assumes that arc length is well defined, but you can handwave that away until you get to integral calculus.2011-08-16
  • 0
    @kahen sure, but I wanted to explain why the coefficients are what they are, instead of just saying 'this series works too, so $\sin=$ this series'.2011-08-16
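
The recurrence route sketched in the comments is easy to carry out: substituting $y=\sum_k a_k x^k$ into $y''+y=0$ gives $(k+1)(k+2)\,a_{k+2}+a_k=0$. A quick check of the resulting coefficients (a Python sketch, not from the original thread):

```python
from fractions import Fraction

# Coefficients of y = sum a_k x^k solving y'' + y = 0, y(0)=0, y'(0)=1:
# the recurrence is (k+1)(k+2) a_{k+2} = -a_k.
a = [Fraction(0), Fraction(1)]          # a_0 = y(0), a_1 = y'(0)
for k in range(10):
    a.append(-a[k] / ((k + 1) * (k + 2)))
print(a[:8])  # 0, 1, 0, -1/6, 0, 1/120, 0, -1/5040
```

The even coefficients vanish and the odd ones are exactly $(-1)^k/(2k+1)!$, as the claimed series requires.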

4 Answers

15

There's the way Euler did it. First recall that $$ \sin(\theta_1+\theta_2+\theta_3+\cdots) = \sum_{\text{odd }k \ge 1} (-1)^{(k-1)/2} \sum_{|A| = k}\ \prod_{i\in A} \sin\theta_i\prod_{i\not\in A} \cos\theta_i. $$ Then let $n$ be an infinitely large integer (that's how Euler phrased it, if I'm not mistaken) and let $$ x= \underbrace{\frac{\theta}{n} + \cdots + \frac{\theta}{n}}_{n\text{ terms}} $$ and apply the formula to find $\sin x$. Finally, recall that (as Euler would put it), since $\theta/n$ is infinitely small, $\sin(\theta/n) = \theta/n$ and $\cos(\theta/n) = 1$. Then do a bit of algebra and the series drops out.

The algebra will include things like saying that $$ \frac{n(n-1)(n-2)\cdots(n-k+1)}{n^k} = 1 $$ if $n$ is an infinite integer and $k$ is a finite integer.
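
The algebra can be watched happening numerically: with a large but finite $n$, replacing $\sin(\theta/n)$ by $\theta/n$ and $\cos(\theta/n)$ by $1$ in the multiple-angle formula already reproduces $\sin\theta$ to high accuracy (a Python sketch, with arbitrary sample values):

```python
import math
from math import comb

# Truncated Euler sum: sum over odd k of (-1)^((k-1)/2) C(n,k) (theta/n)^k,
# using sin(theta/n) ~ theta/n and cos(theta/n) ~ 1 for large n.
# Each term C(n,k)(theta/n)^k is close to theta^k/k!, since C(n,k)/n^k -> 1/k!.
theta, n = 1.0, 10**6
s = sum((-1) ** ((k - 1) // 2) * comb(n, k) * (theta / n) ** k
        for k in range(1, 16, 2))
print(s - math.sin(theta))  # difference on the order of 1e-6
```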

  • 0
    Any chance you could demonstrate a sketch of this algebra you're talking about? Or at least some place I could find the proof?2011-08-15
  • 2
    First a bit about the trigonometric identities. One has $$\begin{align}\sin(\alpha + \beta + \gamma + \delta + \epsilon + \cdots) & = \underbrace{\sin\alpha}\;\underbrace{\cos\beta\cos\gamma\cos\delta\cos\epsilon \cdots} + \text{more terms with just one sine} \\ & \quad {} -\underbrace{\sin\alpha\sin\beta\sin\gamma}\;\underbrace{\cos\delta\cos\epsilon \cdots} {} - \text{other terms with three sines} \\ & \quad {} + \text{terms with five sines, etc.} \end{align} $$2011-08-15
  • 1
    Now imagine a term with three sines and the rest cosines: $$ \sin\alpha\sin\beta\sin\gamma\;\cos\delta\cos\epsilon \cdots + \text{other terms with three sines} $$ $$ = \binom{n}{3}\sin(\theta/n)^3 \cos(\theta/n)^{n-3} $$ $$ = \frac{n(n-1)(n-2)}{6}\left(\frac{\theta}{n}\right)^3 $$ $$ = \frac{\theta^3}{6}. $$2011-08-15
  • 3
    Awesome (in the good proper original sense of the word)! = )2011-08-15
9

This is from Simmons' Calculus, where it appears as an exercise.

Assume $x \ge 0$, so that integrating each inequality from $0$ to $x$ preserves it: $$ \cos x \leq 1$$ $$ \int_0^x\!\cos t \,\mathrm{d}t\leq \int_0^x\! \,\mathrm{d}t$$ $$ \sin x \leq x$$ $$ \int_0^x\!\sin t \,\mathrm{d}t\leq \int_0^x\! t \,\mathrm{d}t$$ $$ \left.-\cos t\right|_0^x\leq \frac{ x^2}{2}$$ $$ 1-\cos x\leq \frac{ x^2}{2}$$ $$ \cos x\geq 1-\frac{ x^2}{2}$$

Continuing, you see that $\sin x$ is less than its expansion when truncated after progressively higher odd numbers of terms and, in alternation, that $\cos x$ is greater than its expansion truncated after progressively higher even numbers of terms.
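
Carrying the iteration two more steps and checking the resulting inequalities numerically (a small Python sketch; the bounds are only claimed for $x \ge 0$):

```python
import math

# Alternating truncation bounds obtained by repeated integration (x >= 0):
#   x - x^3/6 <= sin x <= x
#   1 - x^2/2 <= cos x <= 1 - x^2/2 + x^4/24
ok = all(
    x - x**3 / 6 <= math.sin(x) <= x
    and 1 - x**2 / 2 <= math.cos(x) <= 1 - x**2 / 2 + x**4 / 24
    for x in (i * 0.01 for i in range(1, 301))
)
print(ok)  # True
```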

I don't have the book in front of me. I think this was intended more to suggest the expansion than to rigorously prove it, but my theoretical understanding isn't quite up to identifying what's lacking or to correcting anything. Still, I thought it was interesting when I saw it and I hope it's relevant.

  • 3
    Good clear description of an approach that uses only basic ideas from the calculus.2011-08-16
7

Here is a mosquito-nuking solution: one can use Lagrange inversion:

$$f^{(-1)}(x)=\sum_{k=0}^\infty \frac{x^{k+1}}{(k+1)!} \left(\left.\frac{\mathrm d^k}{\mathrm dt^k}\left(\frac{t}{f(t)}\right)^{k+1}\right|_{t=0}\right)$$

and let $f(t)=\arcsin\,t$; probably the only deal-breaker here is that the expressions for the derivatives get progressively more unwieldy. However, if one takes limits as $t\to 0$ for these derivatives, one recovers the familiar sequence $1,0,-1,0,1,\dots$.
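
To see that sequence emerge without differentiating by hand, one can read the $k$-th derivative at $0$ off a series expansion instead (a SymPy sketch; `coeffs` collects the coefficients of $x^{k+1}$ in the inverse series):

```python
import sympy as sp

# Lagrange inversion applied to f = arcsin: the coefficient of x^(k+1) in
# f^(-1) = sin is (1/(k+1)!) * d^k/dt^k (t/arcsin t)^(k+1) at t = 0.
# The k-th derivative at 0 equals k! times the t^k series coefficient.
t = sp.symbols('t')
coeffs = []
for k in range(6):
    g = sp.series((t / sp.asin(t)) ** (k + 1), t, 0, k + 1).removeO()
    deriv_at_0 = sp.factorial(k) * g.coeff(t, k)
    coeffs.append(deriv_at_0 / sp.factorial(k + 1))
print(coeffs)  # [1, 0, -1/6, 0, 1/120, 0]
```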


There is a version of Lagrange inversion that uses the coefficients of the original power series instead of the function itself. Mathematica natively supports this operation through the InverseSeries[] function, but here is an implementation of one of the simpler algorithms for series reversion, due to Henry Thacher:

a = Rest[CoefficientList[Series[ArcSin[x], {x, 0, 20}], x]]; (* a[[i]] = coefficient of x^i; note a[[1]] == 1 *)
n = Length[a];
Do[
    Do[
      (* c[i, j] = coefficient of x^i in the j-th power of the reverted series *)
      c[i, j + 1] = Sum[c[k, 1]c[i - k, j], {k, 1, i - j}];
      , {j, i - 1, 1, -1}];
    c[i, 1] = Boole[i == 1] - Sum[a[[j]] c[i, j], {j, 2, i}]
    , {i, n}];
Table[c[i, 1], {i, n}]

and then compare with the output of Rest[CoefficientList[Series[Sin[x], {x, 0, 20}], x]].
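
For readers without Mathematica, here is a rough Python port of the same reversion scheme, using exact rationals and the standard series $\arcsin x=\sum_{m\ge 0}\binom{2m}{m}\frac{x^{2m+1}}{4^m(2m+1)}$ (the names `arcsin_coeffs` and `revert` are mine, not part of any library):

```python
from fractions import Fraction
from math import comb

def arcsin_coeffs(n):
    """a[i] = coefficient of x^i in arcsin x, for i = 0..n."""
    a = [Fraction(0)] * (n + 1)
    m = 0
    while 2 * m + 1 <= n:
        a[2 * m + 1] = Fraction(comb(2 * m, m), 4 ** m * (2 * m + 1))
        m += 1
    return a

def revert(a, n):
    """Reversion in the style of the Mathematica code above: c[i][j] holds the
    coefficient of x^i in the j-th power of the inverse series; assumes a[1] == 1."""
    c = [[Fraction(0)] * (n + 2) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(i - 1, 0, -1):
            c[i][j + 1] = sum(c[k][1] * c[i - k][j] for k in range(1, i - j + 1))
        c[i][1] = (1 if i == 1 else 0) - sum(a[j] * c[i][j] for j in range(2, i + 1))
    return [c[i][1] for i in range(1, n + 1)]

n = 9
coeffs = revert(arcsin_coeffs(n), n)
print(coeffs)  # [1, 0, -1/6, 0, 1/120, 0, -1/5040, 0, 1/362880]
```

The output is exactly the Maclaurin coefficients of $\sin x$.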

Other methods, including a modification of Newton's method for series, have been presented, but I won't get into them here.

  • 0
    Try it out in *Mathematica*: `Table[Limit[D[(x/ArcSin[x])^k, {x, k - 1}], x -> 0], {k, 10}]`2011-08-16
  • 0
    As you might be able to tell from the implementation I gave, the method is rather space-intensive; there are methods more parsimonious of space, but discussing them here would take us too far afield.2011-08-24
  • 2
    Mosquito-nuking. Pleasant. +12011-08-24
4

We can start with the basic definition of $e$: $$ e=\lim_{n\to\infty}\left(1+\frac{1}{n}\right)^n $$ then raise $e$ to a real power $x$: $$ \begin{align} e^x&=\lim_{n\to\infty}\left(1+\frac{1}{n}\right)^{nx}\\ &=\lim_{n\to\infty}\left(1+\frac{x}{n}\right)^n \end{align} $$ Next we can extend this to imaginary exponents: $$ e^{ix}=\lim_{n\to\infty}\left(1+\frac{ix}{n}\right)^n\tag{1} $$ One way to look at $(1)$ is using the Binomial Theorem to get a series for $e^{ix}$. $$ \begin{align} e^{ix} &=\lim_{n\to\infty}\left(1+\frac{ix}{n}\right)^n\\ &=\lim_{n\to\infty}\sum_{k=0}^n\binom{n}{k}\left(\frac{ix}{n}\right)^k\\ &=\lim_{n\to\infty}\sum_{k=0}^\infty\frac{P(n,k)}{n^k}\frac{(ix)^k}{k!}\\ &=\sum_{k=0}^\infty\frac{(ix)^k}{k!}\tag{2} \end{align} $$ Passing the limit inside the sum is legal because $\frac{P(n,k)}{n^k}\to 1$ monotonically (here $P(n,k)=n(n-1)\cdots(n-k+1)$, so $\binom{n}{k}=\frac{P(n,k)}{k!}$), and because the final sum converges absolutely.
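
A quick numerical sanity check of $(1)$ and $(2)$, anticipating the final identity $(7)$ (a Python sketch with an arbitrary sample value):

```python
import math

# For large n, (1 + ix/n)^n should already be close to cos x + i sin x,
# and the exponential series in (2) should agree with it.
x, n = 1.3, 10**6
z = (1 + 1j * x / n) ** n
series = sum((1j * x) ** k / math.factorial(k) for k in range(40))
target = complex(math.cos(x), math.sin(x))
print(abs(z - target), abs(series - target))  # both tiny
```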

Another way to look at $(1)$ is using the geometry of complex numbers.

Recall that for a complex number, $z$, we have $$ \begin{align} \left|z^n\right|&=|z|^n\tag{3a}\\ \arg\left(z^n\right)&=n\arg(z)\tag{3b} \end{align} $$ Furthermore, recall that $$ \begin{align} \textstyle\left|1+\frac{ix}{n}\right|&=\textstyle\sqrt{1+\left(\frac{x}{n}\right)^2}\tag{4a}\\ \textstyle\arg\left(1+\frac{ix}{n}\right)&=\textstyle\tan^{-1}\left(\frac{x}{n}\right)\tag{4b} \end{align} $$ Using $\mathrm{(3a)}$ and $\mathrm{(4a)}$, we get $$ \begin{align} \left|e^{ix}\right| &=\left|\lim_{n\to\infty}\textstyle\left(1+\frac{ix}{n}\right)^n\right|\\ &=\lim_{n\to\infty}\textstyle\left(1+\left(\frac{x}{n}\right)^2\right)^\frac{n}{2}\\ &=\lim_{n\to\infty}\textstyle\left(1+\left(\frac{x}{n}\right)^2\right)^{n^2\frac{1}{2n}}\\ &=\lim_{n\to\infty}\textstyle\left(e^{x^2}\right)^\frac{1}{2n}\\ &=1\tag{5} \end{align} $$ Using $(3\mathrm{b})$ and $(4\mathrm{b})$, we get $$ \begin{align} \arg(e^{ix}) &=\arg\left(\lim_{n\to\infty}\textstyle\left(1+\frac{ix}{n}\right)^n\right)\\ &=\lim_{n\to\infty}\textstyle n\;\tan^{-1}\left(\frac{x}{n}\right)\\ &=x\;\lim_{n\to\infty}\textstyle\tan^{-1}\left(\frac{x}{n}\right)\left/\frac{x}{n}\right.\\ &=x\tag{6} \end{align} $$ Using $(5)$ and $(6)$, we see that $e^{ix}$ has length $1$ and argument $x$. Converting $e^{ix}$ to rectangular coordinates, we get $$ e^{ix}=\cos(x)+i\sin(x)\tag{7} $$ Comparing the real and imaginary parts of $(2)$ and $(7)$, we get the series for $\sin(x)$ and $\cos(x)$.
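
The limits $(5)$ and $(6)$ can likewise be checked numerically (a Python sketch with an arbitrary sample value):

```python
import math

# Modulus and argument of (1 + ix/n)^n for large n, per (3a)-(4b):
# the modulus tends to 1 and the argument tends to x.
x, n = 2.0, 10**6
modulus = (1 + (x / n) ** 2) ** (n / 2)
argument = n * math.atan(x / n)
print(modulus - 1, argument - x)  # both tiny
```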