
A trigonometric function (such as sine or cosine), or some combination thereof, can be the solution of a linear differential equation with constant coefficients. But the solution of a higher-order differential equation with non-constant coefficients is often expressed as a power series.

Why is that? Is it because the coefficients are themselves functions of the independent variable? Or do the power series in question tend to converge to simpler trigonometric functions?

  • 0
    @QiaochuYuan: This might have been addressed in the answer below; the more complicated equations have power series of complicated functions, while the ordinary equations can be solved by simple functions (that, of course, have power series). (2012-09-02)

3 Answers

2

For an initial value problem, say $\dfrac{d^n y}{dx^n} = F\left(x, y, \dfrac{dy}{dx}, \ldots, \dfrac{d^{n-1} y}{dx^{n-1}}\right)$ with $y(x_0) = y_0, \ldots, y^{(n-1)}(x_0) = y_{n-1} $, if $F$ is analytic in a neighbourhood of $(x_0, y_0, \ldots, y_{n-1})$, then there is a solution that is analytic in a neighbourhood of $x_0$. You can find arbitrarily many of the Taylor coefficients by writing $y = y_0 + c_1 (x - x_0) + c_2 (x - x_0)^2 + \ldots$, expanding $\dfrac{d^n y}{dx^n} - F\left(x, y, \dfrac{dy}{dx}, \ldots, \dfrac{d^{n-1} y}{dx^{n-1}}\right)$ in powers of $x$, and solving equations for each power of $x$. Moreover, in the case of a homogeneous linear equation with coefficients that are polynomials in $x$, you get a linear recurrence for the coefficients of the solution. In some cases you can solve that recurrence to get an explicit formula for the coefficients.
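To illustrate the last point with a standard example (chosen here just for concreteness), take the Airy equation $y'' = x\,y$. Substituting $y=\sum_{k\ge 0} c_k x^k$ and comparing coefficients of $x^k$ gives $2c_2=0$ and $(k+2)(k+1)\,c_{k+2}=c_{k-1}$ for $k\ge 1$, so all coefficients follow from $c_0=y(0)$ and $c_1=y'(0)$ by a simple linear recurrence, which in this case can even be solved explicitly.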

3

The power series converge to complicated special functions. For uncomplicated differential equations with uncomplicated power series we have names for the functions, like "sin", "cos", "exp", etc. An example of a more complicated special function that is not as well known as cosine is the Fox H-function: http://en.wikipedia.org/wiki/Fox_H-function

  • 0
    Welcome to the site. An upvote to get you going. (2012-09-02)
3

Here is the intuitive story behind Robert Israel's answer:

The following general principle goes back to Newton: any reasonable given or unknown function $x\mapsto f(x)$ defined in a neighborhood of $x=0$, in particular any "analytical expression" such as $e^x$, can be expanded into a power series of the form $\sum_{k=0}^\infty a_k x^k$, where the $a_k$ are real (or complex) constants. This means that there is a $\rho>0$ such that $f(x)=\sum_{k=0}^\infty a_kx^k\qquad\bigl(|x|<\rho\bigr)\ .$ These power series behave in a simple way under addition and multiplication (in particular multiplication by polynomials in $x$), and above all under differentiation. Even composition ("plugging" one series into another) can be handled, though the computations become more involved in that case.
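For example, within the disc of convergence one may differentiate term by term, $\frac{d}{dx}\sum_{k=0}^\infty a_kx^k=\sum_{k=1}^\infty k\,a_kx^{k-1}$, and multiply two series with the Cauchy product $\Bigl(\sum_{k}a_kx^k\Bigr)\Bigl(\sum_{k}b_kx^k\Bigr)=\sum_{k=0}^\infty\Bigl(\sum_{j=0}^k a_jb_{k-j}\Bigr)x^k\ ;$ these two rules are already enough to substitute a series into a differential equation with polynomial right-hand side.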

When an initial value problem $y'(x)=F\bigl(x,y(x)\bigr),\quad y(0)=y_0\qquad(*)$ is given, where the right-hand side is some "analytical expression" $F(x,y)$ in the two variables $x$ and $y$, then only in rare cases is it possible to guess, and subsequently verify, an expression $x\mapsto y(x)$ that solves the problem. But one is always allowed to expand everything in sight into a power series in $x$, writing the unknown solution $y(x)$ with undetermined coefficients $a_k$, and in the majority of cases one then obtains a recursion formula for the $a_k$, with starting value $a_0=y_0$.
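As the simplest illustration (not a case where series are really needed, but it shows the mechanism), take $y'=y$, $y(0)=1$. Writing $y=\sum_{k\ge0}a_kx^k$ and comparing the coefficient of $x^k$ on both sides gives $(k+1)a_{k+1}=a_k$, so $a_k=1/k!$ from $a_0=1$, and the recursion reproduces exactly the series defining $e^x$.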

In the end one has the solution of $(*)$ in the form $y(x)=\sum_{k=0}^\infty a_kx^k$, and this series is hopefully convergent for all $x$ with $|x|<\rho$, for some $\rho>0$. In this way one obtains at least a numerically usable handle on the solution, even though no "elementary" expression for the solution may be available.
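When no closed form for the $a_k$ is in sight, the recursion can still be run numerically. Here is a minimal Python sketch along these lines; the particular equation $y'=x+y^2$, $y(0)=y_0$ and all names in it are just an illustrative choice, not taken from anything above.

    # Taylor coefficients of the solution of y' = x + y^2, y(0) = y0,
    # obtained by matching powers of x (the equation is only an example).

    def cauchy_coefficient(a, b, n):
        """Coefficient of x^n in the product of two power series a and b."""
        return sum(a[i] * b[n - i] for i in range(n + 1))

    def taylor_coefficients(y0, N):
        """First N+1 coefficients a_0, ..., a_N of y(x) = sum_k a_k x^k."""
        a = [0.0] * (N + 1)
        a[0] = y0
        for k in range(N):
            # coefficient of x^k on the right-hand side: [x^k](x) + [x^k](y^2)
            rhs = (1.0 if k == 1 else 0.0) + cauchy_coefficient(a, a, k)
            # y'(x) = sum_k (k+1) a_{k+1} x^k, so matching powers of x gives:
            a[k + 1] = rhs / (k + 1)
        return a

    coeffs = taylor_coefficients(y0=1.0, N=8)
    x = 0.1
    print(coeffs)
    print(sum(c * x**k for k, c in enumerate(coeffs)))  # truncated series at x

Evaluating the truncated series of course only makes sense for $|x|$ well inside the (unknown) radius of convergence, which is exactly the caveat about $\rho$ above.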

  • 1
    And even if the series converges nowhere except at $x=0$ (such as $\sum n!\,x^n$), it can sometimes be manipulated to give useful information. (2012-09-03)