107

I was wondering the following, and I probably already know the answer: no.

Is there another number with properties similar to those of $e$, so that the derivative of $\exp(x)$ is the same as the function itself?

I can guess that there probably isn't, because otherwise $e$ wouldn't be that special, but is there a proof of it?

  • 8
    One of the many definitions of $\exp(x)$ is that it is the unique function such that $f'(x) = f(x)$ and $f(0) = 1$. (2011-08-17)
  • 4
    "The derivative of that number" is always zero... (2011-08-17)
  • 28
    To see this, suppose $f(x)$ also has this property. Take the derivative of $e^{-x}f(x)$ to show this function must be constant. (2011-08-17)
  • 1
    @Sivaram I understand that's one of the properties of $e^x$, but I was wondering if there are any other numbers that have that property. I know there are not. But if people are so certain there's not another number with that property, HOW do they know that? (2011-08-17)
  • 13
    @Timo You keep repeating the word "number" where you mean "function". The derivative of *any* number (even $e$) is zero. The derivative can be thought of as the rate of change with respect to $x$ (or whatever you want to name your variable). If there is no $x$, there is no change. (2011-08-17)
  • 0
    Austin, you're right. I've changed my question :) (2011-08-17)
  • 10
    As a counterexample, doesn't the function $0(x)$ always fulfill $f(x) = f'(x)$? (2011-08-17)
  • 11
    @Peter - Yes, $0(x)$ always fulfills $f(x)=f'(x)$. However, that is not a counterexample, because $0(x)$ is a function of the form $Ce^x$, as described in the accepted answer. Perhaps the question and title should be edited to ask for a proof that $Ce^x$ is the only form of function for which $f(x) = f'(x)$? (2011-08-18)
  • 0
    See another answer: http://math.stackexchange.com/a/1292586/72031 (2016-05-23)

9 Answers

175

Of course $C e^x$ has the same property for any $C$ (including $C = 0$). But these are the only ones.

Proposition: Let $f : \mathbb{R} \to \mathbb{R}$ be a differentiable function such that $f(0) = 1$ and $f'(x) = f(x)$ for all $x$. Then it must be the case that $f(x) = e^x$.

Proof. Let $g(x) = f(x) e^{-x}$. Then

$$g'(x) = -f(x) e^{-x} + f'(x) e^{-x} = (f'(x) - f(x)) e^{-x} = 0$$

by assumption, so $g$ is constant (a standard consequence of the mean value theorem). But $g(0) = 1$, so $g(x) = 1$ identically; that is, $f(x) = e^x$.

N.B. Note that it is also true that $e^{x+c}$ has the same property for any $c$. Thus there exists a function $g(c)$ such that $e^{x+c} = g(c) e^x = e^c g(x)$, and setting $c = 0$, then $x = 0$, we conclude that $g(c) = e^c$, hence $e^{x+c} = e^x e^c$.

This observation generalizes to any differential equation with translation symmetry. Apply it to the differential equation $f''(x) + f(x) = 0$ and you get the angle addition formulas for sine and cosine.
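
For readers who want a numerical sanity check of the proposition (not a proof; a sketch in Python, with the function name, step count, and test constants all my own choices), one can integrate $f' = f$ from several initial values $C$ and confirm that the result matches $Ce^x$:

```python
import math

def solve_f_prime_equals_f(c, x_end=2.0, n_steps=10_000):
    """Integrate f'(x) = f(x) with f(0) = c using classical RK4."""
    h = x_end / n_steps
    f = c
    for _ in range(n_steps):
        # For the autonomous equation f' = f, the RK4 stage slopes
        # are just the current stage values.
        k1 = f
        k2 = f + 0.5 * h * k1
        k3 = f + 0.5 * h * k2
        k4 = f + h * k3
        f += (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return f

for c in (0.0, 1.0, -3.5):
    numeric = solve_f_prime_equals_f(c)
    exact = c * math.exp(2.0)  # the proposition predicts f(2) = C * e^2
    assert abs(numeric - exact) < 1e-8, (c, numeric, exact)
print("every tested solution of f' = f matches C * e^x")
```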

  • 1
    Whooow, cool trick, thanks for clearing this up :) It took me some time to understand it though xD The question is kinda harder than I expected :P But thanks though :D $e$ is a wonderful number. (2011-08-17)
  • 3
    When I read the question, I thought: one must explain how the Mean Value Theorem enters this argument. But this argument does it quite nicely while keeping that in the background. (2011-08-17)
  • 7
    I like this answer, but I think the proof lacks some detail which would make it clearer, and the statement of the proposition lacks the constant you mentioned initially. I would add that, since $g$ is constant and $g(x)=f(x)e^{-x}$, we have $f(x)=g(x)e^x$; hence $f(x)=Ce^x$ for some constant $C$. Thus every possibility for $f$ has the form $f=Ce^x$. (2011-08-18)
  • 1
    @Qia Do you think that the translational symmetry is more fundamental than the uniqueness theorem? (cf. my answer) To me, it is the latter that is the essence of the matter in general, not the former. (2011-08-19)
  • 1
    @Bill: I think that the translational symmetry is underemphasized. Of course the uniqueness theorem is also necessary to conclude that the space of solutions has the expected dimension in general. (2011-08-19)
  • 1
    @Qia You might want to say something explicitly about the uniqueness theorem so that others don't misread your intent - as did I. (2011-08-19)
  • 2
    This is me again. I've helped you twice to earn a gold badge for Great Answer. Congrats! ♥(ˆ⌣ˆԅ) (2014-09-29)
44

Let $f(x)$ be a differentiable function such that $f'(x)=f(x)$. This implies that the $k$-th derivative, $f^{(k)}(x)$, is also equal to $f(x)$. In particular, $f(x)$ is $C^\infty$ and we can write a Taylor expansion for $f$:

$$T_f(x) = \sum_{k=0}^\infty c_k x^k.$$

Notice that the fact that $f(x)=f^{(k)}(x)$ for all $k\geq 0$ implies that the Taylor series $T_f(x_0)$ converges to $f(x_0)$ for every $x_0\in \mathbb{R}$ (more on this later), so we may write $f(x)=T_f(x)$. Since $f'(x) = \sum_{k=0}^\infty (k+1)c_{k+1}x^k = f(x)$, we conclude that $c_{k+1} = c_k/(k+1)$. Since $c_0 = f(0)$, it follows that $c_k = f(0)/k!$ for all $k\geq 0$. Hence:

$$f(x) = f(0) \sum_{k=0}^\infty \frac{x^k}{k!} = f(0) e^x,$$

as desired.

Addendum: About the convergence of the Taylor series. Let us use Taylor's remainder theorem to show that the Taylor series for $f(x)$ centered at $x=0$, denoted by $T_f(x)$, converges to $f(x)$ for all $x\in\mathbb{R}$. Let $T_{f,n}(x)$ be the $n$th Taylor polynomial for $f(x)$, also centered at $x=0$. By Taylor's theorem, we know that $$|R_n(x_0)|\leq |f^{(n+1)}(\xi)|\frac{ |x_0 - 0|^{n+1}}{(n+1)!},$$ where $R_n(x_0)=f(x_0) - T_{f,n}(x_0)$ and $\xi$ is a number between $0$ and $x_0$. Let $M=M(x_0)$ be the maximum value of $|f(x)|$ on the interval $I=[-|x_0|,|x_0|]$, which exists because $f$ is differentiable (therefore, continuous) on $I$. Since $f(x)=f^{(n+1)}(x)$ for all $n\geq 0$, we have: $$|R_n(x_0)|\leq |f^{(n+1)}(\xi)|\frac{ |x_0|^{n+1}}{(n+1)!} = |f(\xi)|\frac{ |x_0|^{n+1}}{(n+1)!}\leq M \frac{|x_0|^{n+1}}{(n+1)!} \longrightarrow 0 \ \text{ as } \ n\to \infty.$$ The limit is $0$ because $M$ is a constant (once $x_0$ is fixed) and $A^n/n! \to 0$ for all $A\geq 0$. Therefore, $T_{f,n}(x_0) \to f(x_0)$ as $n\to \infty$ and, by definition, this means that $T_f(x_0)$ converges to $f(x_0)$.
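
To see the addendum's estimate at work, here is a short numerical sketch (plain Python; the choice $x_0 = 3$ and the loop bounds are arbitrary) comparing the partial sums $T_{f,n}(x_0)$ with $e^{x_0}$ and with the bound $M\,|x_0|^{n+1}/(n+1)!$:

```python
import math

x0 = 3.0
M = math.exp(abs(x0))          # max of |e^x| on [-|x0|, |x0|]
partial_sum, term = 0.0, 1.0   # term holds x0**n / n!
for n in range(30):
    partial_sum += term        # partial_sum is now T_{f,n}(x0)
    term *= x0 / (n + 1)       # next term, x0**(n+1) / (n+1)!
    bound = M * abs(x0) ** (n + 1) / math.factorial(n + 1)
    if n % 5 == 0:
        print(f"n={n:2d}  |e^x0 - T_n(x0)| = {abs(math.exp(x0) - partial_sum):.3e}"
              f"  bound = {bound:.3e}")
```

Both columns shrink to zero, as the remainder estimate predicts.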

  • 7
    You left out one step, which is to show the power series converges to $f(x)$ (which is not true for all $C^{\infty}$ functions). In the case at hand you can use the formulas for the remainder term of a finite Taylor expansion to get convergence. (2011-08-17)
  • 1
    Yes, thanks. I'll come back in a bit and add it to the answer. (2011-08-17)
  • 3
    +1 for submitting this answer, even though it may not get as many upvotes as Yuan's simpler answer. (2011-08-19)
28

Yet another way: By the chain rule, ${\displaystyle {d \over dx} \ln|f(x)| = {f'(x) \over f(x)} = 1}$. Integrating, you get $\ln |f(x)| = x + C$. Exponentiating both sides, you obtain $|f(x)| = e^{x + C} = C'e^x$, where $C' > 0$. As a result, $f(x) = C''e^x$, where $C'' = \pm C'$ is a nonzero constant.

If you are worried about $f(x)$ being zero, the above shows $f(x)$ is of the form $C''e^x$ on any interval for which $f(x)$ is nonzero. Since $f(x)$ is continuous, this implies $f(x)$ is always of that form, unless $f(x)$ is identically zero (in which case we can just take $C'' = 0$ anyhow).
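
The same separation-of-variables computation can be reproduced symbolically; here is a minimal sketch, assuming SymPy is available:

```python
import sympy as sp

x = sp.symbols('x')
f = sp.Function('f')
# Solve f'(x) = f(x); dsolve returns the general solution
# Eq(f(x), C1*exp(x)), with the integration constant C1 playing
# the role of C'' above.
solution = sp.dsolve(sp.Eq(f(x).diff(x), f(x)), f(x))
print(solution)
```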

  • 1
    Wow, I like this. It's so simple :D (2017-01-29)
25

Hint $\rm\displaystyle\:\ \begin{align} f{\:'}\!\! &=\ \rm a\ f \\ \rm \:\ g'\!\! &=\ \rm a\ g \end{align} \iff \dfrac{f{\:'}}f = \dfrac{g'}g \iff \bigg(\!\!\dfrac{f}g\bigg)' =\ 0\ \iff W(f,g) = 0\:,\ \ W = $ Wronskian

This is a very special case of the uniqueness theorem for linear differential equations, esp. how the Wronskian serves to measure linear independence of solutions. See here for a proof of the less trivial second-order case (which generalizes to $n$th order). See also the classical result below on Wronskians and linear dependence, from one of my old sci.math posts.
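
As a concrete illustration of the hint (a sketch assuming SymPy; the pair $f = 2e^x$, $g = 5e^x$ is an arbitrary choice), two solutions of $y' = y$ have vanishing Wronskian and constant ratio:

```python
import sympy as sp

x = sp.symbols('x')
f, g = 2 * sp.exp(x), 5 * sp.exp(x)   # both satisfy y' = y
# Wronskian W(f, g) = f g' - f' g, computed directly:
W = sp.simplify(f * sp.diff(g, x) - sp.diff(f, x) * g)
ratio = sp.simplify(f / g)
print(W, ratio)   # prints: 0 2/5
```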

Theorem $\ \ $ Suppose $\rm\:f_1,\ldots,f_n\:$ are $\rm\:n-1\:$ times differentiable on interval $\rm\:I\subset \mathbb R\:$ and suppose they have Wronskian $\rm\: W(f_1,\ldots,f_n)\:$ vanishing at all points in $\rm\:I\:.\:$ Then $\rm\:f_1,\ldots,f_n\:$ are linearly dependent on some subinterval of $\rm\:I\:.$

Proof $\ $ We employ the following easily proved Wronskian identity:

$\rm\qquad\ W(g\ f_1,\ldots,\:g\ f_n)\ =\ g^n\ W(f_1,\ldots,f_n)\:.\ $ This immediately implies

$\rm\qquad\quad\ \ \ W(f_1,\ldots,\: f_n)\ =\ f_1^{\:n}\ W((f_2/f_1)',\ldots,\:(f_n/f_1)'\:)\quad $ if $\rm\:\ f_1 \ne 0 $

Proceed by induction on $\rm\:n\:.\:$ The Theorem is clearly true if $\rm\:n = 1\:.\:$ Suppose that $\rm\: n > 1\:$ and $\rm\:W(f_1,\ldots,f_n) = 0\:$ for all $\rm\:x\in I.\:$ If $\rm\:f_1 = 0\:$ throughout $\rm\:I\:$ then $\rm\: f_1,\ldots,f_n\:$ are dependent on $\rm\:I.\:$ Else $\rm\:f_1\:$ is nonzero at some point of $\rm\:I\:$ so also throughout some subinterval $\rm\:J \subset I\:,\:$ since $\rm\:f_1\:$ is continuous (being differentiable by hypothesis). By above $\rm\:W((f_2/f_1)',\ldots,(f_n/f_1)'\:)\: =\: 0\:$ throughout $\rm\:J,\:$ so by induction there exists a subinterval $\rm\:K \subset J\:$ where the arguments of the Wronskian are linearly dependent, i.e.

on $\rm\ K:\quad\ \ \ c_2\ (f_2/f_1)' +\:\cdots\:+ c_n\ (f_n/f_1)'\: =\ 0,\ \ $ all $\rm\:c_i'\:=\ 0\:,\ $ some $\rm\:c_j\ne 0 $

$\rm\qquad\qquad\: \Rightarrow\:\ \ ((c_2\ f_2 +\:\cdots\: + c_n\ f_n)/f_1)'\: =\ 0\ \ $ via $({\phantom m})'\:$ linear

$\rm\qquad\qquad\: \Rightarrow\quad\ \ c_2\ f_2 +\:\cdots\: + c_n\ f_n\ =\ c_1 f_1\ \ $ for some $\rm\:c_1,\ c_1'\: =\: 0 $

Therefore $\rm\ f_1,\ldots,f_n\:$ are linearly dependent on $\rm\:K \subset I\:.\qquad$ QED

This theorem has as immediate corollaries the well-known results that the vanishing of the Wronskian on an interval $\rm\: I\:$ is a necessary and sufficient condition for linear dependence of

$\rm\quad (1)\ $ functions analytic on $\rm\: I\:$
$\rm\quad (2)\ $ functions satisfying a monic homogeneous linear differential equation
$\rm\quad\phantom{(2)}\ $ whose coefficients are continuous throughout $\rm\: I\:.\:$

  • 5
    Careful not to divide by zero. (2011-08-17)
  • 8
    @Mark My hints often ignore degenerate cases to focus on the essence of the matter. (2011-08-17)
  • 9
    Since first-year calculus students usually haven't heard of the Wronskian, why not just say $(f/g)' = 0$ and therefore $f/g$ is constant? (2011-08-17)
  • 3
    @Mic I presumed the "is constant" inference was clear. The point of mentioning the Wronskian is to emphasize that this is a special case of more general results (for those who may know such). On Wronskians see also [my sci.math post here.](http://groups.google.com/groups?selm=y8zfznjojwo.fsf@nestle.ai.mit.edu) (2011-08-17)
14

Here is a different take on the question. There is a whole spectrum of discrete "calculi" which converge to the continuous case, each of which has its own special "$e$".

Pick some $t>0$. Consider the equation $$f(x)=\frac{f(x+t)-f(x)}{t}$$ It is not hard to show by induction that there is a function $C_t:[0,t)\to \mathbb{R}$ so that $$f(x)=C_t(\{\frac{x}{t}\})(1+t)^{\lfloor\frac{x}{t}\rfloor}$$ where $\{\cdot\}$ and $\lfloor\cdot\rfloor$ denote fractional and integer part, respectively. If I take Qiaochu's answer for comparison, then $C_t$ plays the role of the constant $C$ and $(1+t)^{\lfloor\frac{x}{t}\rfloor}$ the role of $e^x$. Therefore for such a discrete calculus the right value of "$e$" is $(1+t)^{1/t}$. Now it is clear that as $t\to 0$ the equation becomes $f(x)=f'(x)$, and $(1+t)^{1/t}\to e$.
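
Both limits are easy to watch numerically; here is a small sketch (plain Python, step sizes arbitrary) of the base $(1+t)^{1/t}$ approaching $e$, and of the difference equation reproducing $e^x$ as $t\to 0$:

```python
import math

# The "e" of the discrete calculus with step t:
for t in (1.0, 0.1, 0.001, 1e-6):
    print(f"t = {t:<8g}  (1+t)^(1/t) = {(1 + t) ** (1 / t):.8f}")
print(f"e            = {math.e:.8f}")

# Iterating f(x + t) = (1 + t) f(x) from f(0) = 1 up to x = 1
# approximates e^1 for small t.
t, f = 0.001, 1.0
for _ in range(int(1.0 / t)):
    f *= 1 + t
print(f"discrete exp(1) with t = {t}: {f:.6f}   vs e = {math.e:.6f}")
```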

  • 0
    I'd like some clarification after "it is not hard to show..." Is $C_t$ a special notation? (2012-03-19)
  • 1
    @Peter: it's just a function Gjergji defined for convenience. (2012-03-24)
  • 2
    @J.M. The definition is kind of a leap. I'm interested in that leap. (2012-03-24)
  • 1
    Very cool... faleminderit (thank you). (2013-04-29)
12

The solutions of $f(x) = f'(x)$ are exactly $f(x) = f(0) e^x$. But you can also write it as $b a^x$, if you want a different base. Then $f'(x) = b \log(a) a^x$, and so if you want $f'=f$ you need $\log(a)=1$ and $a=e$ (except for the trivial case $b=0$).
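
This computation is mechanical enough to check symbolically; a minimal sketch assuming SymPy:

```python
import sympy as sp

x = sp.symbols('x')
a, b = sp.symbols('a b', positive=True)
f = b * a ** x
print(sp.diff(f, x))                     # a**x * b * log(a)
# f' = f forces log(a) = 1, i.e. a = e:
print(sp.solve(sp.Eq(sp.log(a), 1), a))  # [E]
```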

11

Here is the proof they use at high school; it is not as deep or instructive, but it doesn't require as much background knowledge.

$$\begin{eqnarray*} \frac{dy}{dx} &=& y\\ \frac{dx}{dy} &=& \frac 1 y\\ x &=& \log |y| + C\\ y &=& A\exp(x) \end{eqnarray*} $$

  • 2
    Why can you switch from $\frac{dy}{dx}=y$ to $\frac{dx}{dy}=\frac1y$? Last I recall they are not teaching non-standard analysis in high school. (2012-07-02)
  • 3
    In the British high school system, the ability to switch between dy/dx and dx/dy is a compulsory part of the math curriculum (in particular it's in "Core 3", a module taken in the final year, i.e. when students are 17-18, and the 3rd of 4 compulsory pure mathematics modules). It's not a piece of non-standard analysis. However, this is a high school course, and no formal real analysis is taught that justifies the result. (2012-07-03)
  • 0
    The treatment of differentials as though they were fractions is *not* correct in standard analysis; it can be made rigorously true where non-standard analysis is involved, though. In academic mathematics the use of unproved claims is highly dubious and one should avoid doing that. Even if you were taught something in high school, it does not mean that it is actually true; often low-level mathematics is coarsely cut and shaved to fit "easy going patterns and algorithms" rather than building towards deep understanding of the mathematical nature of the process. (2012-07-03)
  • 1
    British students are _not_ taught to treat dy/dx as a fraction! They are _explicitly_ warned against doing so! And it has _not_ been treated as a fraction here! NB neither standard nor non-standard analysis is taught at high school - since most students who'll use calculus will be doing physics or engineering undergrad, not math. However, since this is a question often asked by high schoolers, I wanted to give an explanation using only high school methods. In Core 3, British students are expected to solve questions like $dy/dx=3y\sqrt x$, so $dy/dx=y$ should be an "easy" question for them. (2012-07-03)
  • 0
    I didn't justify any steps as I was trying to show how high school students can "solve" $dy/dx=y$ as an exam question, and the steps are just applications of basic high school methods. Teachers will justify, or at least motivate, methods as they are taught, but without real analysis, and students don't regurgitate proofs in exams. The final step is fairly straightforward algebra; the penultimate step requires $\int \! 1/x \, \mathrm{d} x$ (used but rarely explained at high school) and the 1st step can be explained by the "chain rule", $dy/dx = dy/du \times du/dx$ - you find $dy/dx \times dx/dy = 1$. (2012-07-03)
  • 1
    @AsafKaragila: Dear Asaf, The formula that Just uses is standard, and has many justifications that have nothing to do with non-standard analysis. It presupposes that the relationship $y = y(x)$ is invertible, so that we can write $x$ as a function of $y$, but then again the formula only makes sense when $dy/dx$ is non-zero, in which case we know (inverse function theorem, if you like) that indeed $y = y(x)$ *is* so invertible. As with Bill's answer, this one has complications if we allow for the possibility *a priori* that $y(x)$ vanishes at some values of $x$. (One advantage of ... (2012-07-03)
  • 0
    Qiaochu's answer is that it exploits the existence of the function $e^x$ which is *known* to be non-zero everywhere, and avoids trying to directly deduce this for the unknown function.) Regards, (2012-07-03)
  • 0
    This method does fail to find the solution $y=0$, which is an indication of its lack of rigor! (More formally you'd need to split into cases before inverting; if there is one place where $y=0$ it is intuitively clear that $y=0$ everywhere, but one has to be careful, since e.g. $y=x^2$ has $y=0$ and $dy/dx=0$ at $x=0$ without vanishing everywhere!) Nevertheless this approach seems to be the only one so far that does not rely on knowing the answer before setting out the strategy. Effectively, it's just treating $dy/dx=y$ as a separable differential equation & applying the standard techniques. (2012-07-04)
3

Let $x(t)$ be a $C^1$ function on the whole line solving $\dot{x}(t) = x(t)$, $x(0) = 1$. Using the Taylor expansion with remainder, we show that necessarily $x(t) = e^t$.

We have that $\dot{x} = x$ implies $x^{(n)} = x^{(n-1)}$ for all $n \ge 1$, and by induction on $n$, we have that $x(t)$ is $C^\infty$ with $x^{(n)} = x$ for all $n$. Thus, if $x(0) = 1$ and $\dot{x} = x$, Taylor's Theorem gives$$x(t) = \left( \sum_{k=0}^{N-1} {{t^k}\over{k!}}\right) + {{x^{(N)}(t_1)}\over{N!}}t^N,$$for some $t_1$ between $0$ and $t$. But $x^{(N)} = x$, so if$$M = \max_{|t_1| \le |t|} |x(t_1)|,$$which we know exists by compactness of $[-|t|, |t|]$, then$$\left| x(t) - \sum_{k=0}^{N-1} {{t^k}\over{k!}}\right| \le {{M|t|^N}\over{N!}}.$$The right-hand side tends to $0$ as $N \to \infty$, so the series for $e^t$ converges to $x(t)$.

-2

Note that $e$ is defined by the following limit: $e=\lim_{n \rightarrow \infty}(1+ \frac{1}{n})^n$. Then $e^x=\lim_{n \rightarrow \infty}(1+ \frac{1}{n})^{nx}$. Applying the definition of the derivative, $f'(x) = \lim_{h \rightarrow 0} \frac{f(x+h)-f(x)}{h}$, one obtains: $(e^x)'=\lim_{h \rightarrow 0} \frac{ \lim_{n \rightarrow \infty}((1+ \frac{1}{n})^{n(x+h)}-(1+ \frac{1}{n})^{nx})}{h} = \lim_{h \rightarrow 0}\left( \lim_{n \rightarrow \infty}(1+\frac{1}{n})^{nx} \lim_{n \rightarrow \infty}\frac{(1+\frac{1}{n})^{nh}-1}{h}\right)$

$= e^x \lim_{h \rightarrow 0} \lim_{n \rightarrow \infty}(\frac{(1+\frac{1}{n})^{nh}-1}{h})$.

Now one can couple $h$ and $n$ through the relation $h= \frac{C}{n}$, with $C$ a finite constant, because if $n \rightarrow \infty$ then $h$ tends to zero. Hence:

$\lim_{h \rightarrow 0} \lim_{n \rightarrow \infty}\frac{(1+\frac{1}{n})^{nh}-1}{h} = \lim_{h \rightarrow 0} \frac{(1+\frac{h}{C})^{C}-1}{h} = \lim_{h \rightarrow 0} \frac{1+C \frac{h}{C} + \frac{C(C-1)}{2}(\frac{h}{C})^2+O(h^3)-1}{h} = \lim_{h \rightarrow 0} \left(1 + \frac{C(C-1)}{2}\frac{h}{C^2}+O(h^2)\right) = 1$

Therefore $(Ce^x)'=C(e^x)'=Ce^x$.

q.e.d.
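
The inner limit above is at least easy to check numerically (a sketch in plain Python; the values of $C$ and $h$ are arbitrary):

```python
# ((1 + h/C)**C - 1) / h should approach 1 as h -> 0, for any fixed C.
for C in (2.0, 5.0, 50.0):
    for h in (0.1, 0.01, 0.001):
        value = ((1 + h / C) ** C - 1) / h
        print(f"C = {C:<4g} h = {h:<6g} ratio = {value:.6f}")
```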

  • 0
    Late to the party, but all this proves is that $\left(Ce^x\right)'=Ce^x$, not that $f(x)=f'(x)\iff f(x)=Ce^x$. (2016-09-07)