
I have been looking at the proof of the existence of $e^x$ and its properties, and I understand the induction argument which yields the Taylor series expansion around $x=0$. For example,

$E_1(x)=1 + x$, $E_{n+1}(x)=1 + \int_0^x E_n(t)\,dt$, etc.

However, I wonder how this argument was developed informally before the proof. For example, how was $E_1(x)$, etc. chosen?

  • 2
    I guess you could have started with $E_0(x) = 1$ as well. (2011-08-31)
  • 0
    I haven't much wisdom on the history, but Hardy's Pure Mathematics does a pretty good job on the existence and properties of exponential and logarithmic functions. In fact I've not seen his treatment of the logarithm as a limit anywhere more recent (the modern treatment seems to be as the integral of $1/x$). (2011-08-31)
  • 0
    There is also the fact that $e^x$ is the inverse of $\log x$. (2011-08-31)
  • 0
    Should this be tagged [math-history]? (2011-08-31)

3 Answers

6

One way of solving a differential equation of the form $$ \begin{align} &\frac{dy}{dx}=F(y(x)),\\ &y(0)=a, \end{align} $$ is to rewrite it in integral form $$ y(x)=a+\int_0^xF(y(u))\,du. $$ Here, we are solving for functions $y\colon\mathbb{R}^+\to\mathbb{R}$, and $F\colon\mathbb{R}\to\mathbb{R}$ is given. The integral form can be solved iteratively. First choose any (continuous) initial guess $y_0\colon\mathbb{R}^+\to\mathbb{R}$, then iteratively define $y_{n+1}(x)$ in terms of $y_n$: $$ y_{n+1}(x)=a+\int_0^xF(y_n(u))\,du. $$ This method is known as Picard iteration, and is guaranteed to converge to the unique solution of the differential equation for a large class of functions $F$. For example, it always converges if $F$ is Lipschitz continuous.

The exponential function $y(x)=\exp(x)$ satisfies $\frac{dy}{dx}=y$ and $y(0)=1$. This differential equation can be solved by Picard iteration: taking $F(y)=y$ with initial guess $y_0=0$ or $y_0=1$ gives the iteration described in the question.
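To make this concrete, here is a short sketch (not from the original thread; the helper name `picard_step` is my own) that runs the Picard iteration for $y'=y$, $y(0)=1$ exactly, representing each polynomial iterate as a list of coefficients:

```python
from fractions import Fraction

def picard_step(coeffs):
    """One Picard step for y' = y, y(0) = 1.

    A polynomial c0 + c1*x + ... is represented as the list [c0, c1, ...].
    The step computes y_{n+1}(x) = 1 + integral_0^x y_n(u) du.
    """
    # Integrate term by term: c_k x^k -> c_k/(k+1) x^(k+1),
    # then put the constant term 1 in front.
    return [Fraction(1)] + [c / (k + 1) for k, c in enumerate(coeffs)]

y = [Fraction(1)]          # initial guess y_0(x) = 1
for _ in range(5):
    y = picard_step(y)

# y is now [1, 1, 1/2, 1/6, 1/24, 1/120]:
# the degree-5 Taylor partial sum of e^x.
print(y)
```

Each iterate is exactly a partial sum of the Taylor series, which is why the $E_n$ in the question build up the series for $e^x$ term by term.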

  • 0
    Thank you. +1. How then, from the Taylor series, was it determined that this was the exponential function, and specifically of the form $e^x$? Was $e$ already known, or did this help in its discovery? (2011-08-31)
  • 0
    @analysisjb: Sorry, I don't know in what order the various arguments were developed historically. This answer is really just saying that the sequence you state is a natural way of finding exp(x) if you start from the differential equation. No doubt Picard iteration for general differential equations came later. Showing that it converges to exp(x) rather depends on how you define exp. It is guaranteed to converge to the unique solution of $dy(x)/dx=y(x)$. The fact that $y(x_1+x_2)=y(x_1)y(x_2)$ follows from linearity. Solutions for different initial conditions are obtained by scaling. (2011-08-31)
  • 0
    But $e$ has been known for a long time, so, almost certainly, it was already known. For a very short history, see Wikipedia (http://en.wikipedia.org/wiki/E_%28mathematical_constant%29#History). $e$ has its roots in logarithm tables. (2011-08-31)
  • 0
    But, as mentioned in the Wikipedia link, $e$ was first used explicitly by Leibniz. As he developed differential/integral calculus, I suppose that he could have used an argument like this (non-rigorously). (2011-08-31)
0

I like starting with the functional equation $f(x+y)=f(x)f(y)$. From this, assuming differentiability, we can show that $f'(x) = f'(0)f(x)$: since $f(x+h)-f(x) = f(x)f(h)-f(x) = f(x)(f(h)-1)$, we have $(f(x+h)-f(x))/h = f(x)(f(h)-1)/h$; letting $h \rightarrow 0$ gives the result, because $f(0)=1$ makes $(f(h)-1)/h \rightarrow f'(0)$.

This approach also works for the logarithm, inverse tangent, and other functions characterized by similar functional equations.

Once you have the differential equation, proceed as usual.
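As a quick numerical sanity check of the derivation above (a sketch of mine, using $f(x)=2^x$ as one concrete solution of the functional equation), finite differences confirm that $f'(x) \approx f'(0)f(x)$, with $f'(0)=\ln 2$:

```python
import math

def f(x):
    # An example function satisfying f(x+y) = f(x) f(y); here f(x) = 2^x.
    return 2.0 ** x

h = 1e-6
fprime0 = (f(h) - f(0)) / h        # finite-difference estimate of f'(0), close to ln 2

for x in [0.5, 1.0, 2.0]:
    lhs = (f(x + h) - f(x)) / h    # finite-difference estimate of f'(x)
    rhs = fprime0 * f(x)           # the claimed identity f'(x) = f'(0) f(x)
    assert abs(lhs - rhs) < 1e-4

print(fprime0, math.log(2))        # both approximately 0.6931
```

The constant $f'(0)$ is what singles out a particular exponential: $f'(0)=1$ gives $e^x$, and the general solution $f(x)=e^{f'(0)x}$ follows from the differential equation.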

Of course, I claim no originality for this - I just like it.

0

For example, how was $E_1(x)$, etc. chosen?

As long as $E_1(x)$ has the correct value at $x=0$, the iteration will converge to the same target.