3

We were in class looking at both methods and noticed that there is a difference in the error, but we didn't go into why. The other method used the Taylor expansion $$e^x=1+x+\frac{x^2}{2!}+\frac{x^3}{3!}+\cdots$$

Why does using the identity $$e^x=\frac{1}{e^{-x}}$$ work better for negative numbers?

To try to clear up what I'm asking: we coded a program to graph the error of the Taylor series expansion of $e^x$ to $n$ terms. We then coded another one that uses the identity mentioned above in the expansion, and noticed that it worked better for negative numbers. Why is that the case?

For comparison, we computed the absolute fractional error of the sums $$\left|\frac{T(x,N)-e^x}{e^x}\right|$$ for each method (with the identity and without), where $T(x,N)$ is the $N$-th order Taylor series expansion of $e^x$. We plotted the error against the order of expansion (the number of terms in the sum). We evaluated various numbers and saw that, without the identity, the error was higher for negative numbers.
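For concreteness, here is a minimal Python sketch of that experiment (the names `taylor_exp` and `frac_error` are mine, not from our actual class code):

```python
import math

def taylor_exp(x, N):
    """Partial sum of the Taylor series of e^x up to order N."""
    term, total = 1.0, 1.0
    for k in range(1, N + 1):
        term *= x / k          # builds x^k / k! incrementally
        total += term
    return total

def frac_error(approx, exact):
    """Absolute fractional error |T(x,N) - e^x| / e^x."""
    return abs(approx - exact) / exact

x = -10.0
exact = math.exp(x)
for N in (10, 20, 40, 60):
    direct   = taylor_exp(x, N)         # sum the series at x directly
    identity = 1.0 / taylor_exp(-x, N)  # use e^x = 1 / e^{-x}
    print(N, frac_error(direct, exact), frac_error(identity, exact))
```

Running this shows the direct sum's error levelling off far above machine precision for negative $x$, while the identity version keeps improving.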

  • 1
    Why was this downvoted?2017-01-29
  • 1
    @parsiad based on the closed-vote, it seems that someone found it unclear what exactly is being asked. I would agree that it's not clear what the asker means.2017-01-29
  • 1
    @Citut when you say $1/e^{-x}$ works better *"for negative numbers"*, do you mean $1/e^{-x}$ works better *"for **negative values** of $x$"*?2017-01-29
  • 1
    @Citut also, could you clarify what "works better" is supposed to mean (or what you think it should mean)? Do you mean that the series for $1/e^{-x}$ is supposed to *converge more quickly*?2017-01-29
  • 1
    @Omnomnomnom it is pretty obvious that "converge quicklier" is exactly what OP meant.2017-01-29
  • 1
    @Wolfram first of all, "quicklier" is not a word. "More quickly" and "faster" work there. Second, it is *not* obvious, at least to me. There are a whole range of things that determine how "good" a numerical method might be that don't necessary have to do with speed of convergence. My *best guess* is certainly that OP is talking about speed of convergence, but based on down-votes and closed-votes (not mine by the way), OP should definitely be explicit about that.2017-01-29
  • 0
    @Omnomnomnom I think that's what I meant? Our instructor just said the second method works better for negative numbers, so I'm not sure why that is the case.2017-01-29
  • 1
    @Citut did he use any phrase besides "works better"? The more precise you could be, the better. Maybe it would help if you explained exactly what you were doing in class. I still think you should clarify what "evaluating negative numbers" is supposed to mean, too.2017-01-29
  • 0
@Omnomnomnom The work of a mathematician is often not only to answer the rigorously formulated question, but also to make a vague question rigorous beforehand. Of course, it is better for the OP to do this work himself, but it is not always easy. And I'm not a native English speaker, so I'm sorry if "quicklier" seems weird; I will not use that word anymore, thanks.2017-01-29
  • 0
    @Omnomnomnom I posted one more update in the original post. Hopefully it's more clear now.2017-01-29

2 Answers

3

Intuitive answer: when we sum the series up to the term $x^n/n!$, the absolute error is roughly the modulus of the next term, $|x^{n+1}/(n+1)!|$, because each term is much smaller than the previous one for large enough $n$. So for $e^{-x}$ and $e^x$ the absolute error is roughly the same if we sum up to the $n$th term, because these moduli coincide for opposite arguments. However, for $x<0$ we have $e^x \ll e^{-x}$, so the *relative* error is much higher for $e^x$ than for $e^{-x}$. If we instead calculate $e^x$ as $1/e^{-x}$, its relative error is the same as that of $e^{-x}$, and thus much better.
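This can be checked numerically; a small Python sketch (the naming is mine, not from the question's code):

```python
import math

def partial_sum(x, n):
    """Sum of the Taylor terms x^k / k! for k = 0..n."""
    term, s = 1.0, 1.0
    for k in range(1, n + 1):
        term *= x / k
        s += term
    return s

x, n = 5.0, 25
abs_err_pos = abs(partial_sum(x, n) - math.exp(x))    # error at x = +5
abs_err_neg = abs(partial_sum(-x, n) - math.exp(-x))  # error at x = -5
# The absolute errors are of the same order, but the relative errors differ
# enormously because e^{-5} is so much smaller than e^{+5}:
rel_err_pos = abs_err_pos / math.exp(x)
rel_err_neg = abs_err_neg / math.exp(-x)
print(rel_err_pos, rel_err_neg)
```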

0

I know it's really late, but since there is no complete answer I'll answer it myself.

The first thing you need to know about is the "machine epsilon". When a computer represents a number in floating point (32 or 64 bits), say the number 1, there is a slight margin of error ($2^{-23}$ for 32-bit and $2^{-52}$ for 64-bit). That is what we call $\epsilon$.

If we add or subtract a number $x$ with $|x| < \epsilon\,|y|$ to another number $y$, our machine will compute the result as just $y$: the contribution of $x$ falls below the machine precision and is lost.
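You can see this absorption directly in Python, which uses 64-bit doubles:

```python
import sys

eps = sys.float_info.epsilon   # 2**-52, the gap between 1.0 and the next double
print(1.0 + eps / 2 == 1.0)    # -> True: the half-epsilon is absorbed
print(1.0 + eps == 1.0)        # -> False: eps itself survives next to 1.0
```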

In the Taylor series of $e^x$ for $x < 0$, the terms alternate in sign:
$$e^x = f_0 - f_1 + f_2 - f_3 \pm\cdots \pm f_n, \qquad f_k = \frac{|x|^k}{k!}.$$ As you see, we keep alternating between positive and negative terms. The partial sums subtract large, nearly equal quantities while the final result can be tiny, so rounding errors of size around $\epsilon$ times the largest terms do not cancel, thus creating unwanted error.

If we compute $e^x$ for $x < 0$ as $\frac{1}{e^{|x|}}$, all the terms of the Taylor series are positive, there is no cancellation, and we avoid the $\epsilon$ errors. Thus the method converges in fewer iterations.
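A quick Python illustration of both behaviours (the helper name `taylor_series` is mine):

```python
import math

def taylor_series(x, N):
    """Naive partial sum of e^x = sum of x^k / k!, k = 0..N."""
    term, s = 1.0, 1.0
    for k in range(1, N + 1):
        term *= x / k
        s += term
    return s

x = -30.0
# Direct sum: alternating terms as large as ~1e11 must cancel down to a
# result near 1e-13, so rounding noise swamps the answer.
direct = taylor_series(x, 200)
# Via the identity: every term is positive, no cancellation occurs.
via_identity = 1.0 / taylor_series(-x, 200)

exact = math.exp(x)
print(abs(direct - exact) / abs(exact))        # huge relative error
print(abs(via_identity - exact) / abs(exact))  # near machine epsilon
```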

PS: I just created an account to answer this; I hope it's understandable and answered your question :)