
I often perform calculations using repeating decimals (I'm not sure that's the proper term), things like $\frac{1}{3}$, for example.

When I type the sum into a calculator, I just type a few digits, maybe four.

A random example: $\frac{8 \times 3.3333}{5}$

Is there a generally accepted standard for how many digits I need to keep to be fairly accurate (let's say to 3 significant figures)?
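
For what it's worth, one quick way to answer this for a specific calculation is to compare the truncated version against the exact one. A minimal sketch in Python (the truncation lengths are arbitrary, chosen purely for illustration):

```python
from fractions import Fraction

exact = 8 * Fraction(10, 3) / 5        # the exact value, 16/3

# Truncate 10/3 = 3.333... after k decimal digits and see how the error behaves.
for k in range(1, 7):
    approx = 8 * float("3." + "3" * k) / 5
    rel_err = abs(approx - float(exact)) / float(exact)
    print(f"{k} decimals: {approx:.6f}, relative error ~ {rel_err:.0e}")
```

For a single multiplication or division like this, the relative error of the result is roughly the relative error of the truncated factor, so keeping one digit more than the number of significant figures you want in the answer is usually enough.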

If I were doing these calculations seriously (at present it's mostly for fun), I would of course do it properly.

(Please feel free to clean up the MathJax, as I've just never used it before, or even the tags; I'm not sure I've picked the right one.)

  • 1
    http://mathworld.wolfram.com/SignificantDigits.html They give the general rules for finding the number of significant digits when performing the basic arithmetic operations. From that, you just reverse-engineer to find the number of significant digits you need from the start. (2012-02-20)
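
To illustrate the kind of reverse-engineering that comment describes (a rough rule of thumb, not a quote from the linked page): under multiplication and division, relative errors approximately add, so each rounded input should carry at least as many significant digits as you want in the result.

```python
import math

# 3.3333 is 10/3 rounded to 5 significant digits: its error is at most half a
# unit in the last digit. 8 and 5 are exact, so they contribute no error.
a, half_ulp = 3.3333, 0.00005
rel_err = half_ulp / a                 # relative error of the rounded factor,
                                       # and (roughly) of the whole result 8*a/5

sig_digits = -math.log10(2 * rel_err)  # crude conversion back to a digit count
print(f"relative error ~ {rel_err:.1e}, roughly {sig_digits:.1f} significant digits")
```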

2 Answers

7

No, there is no "generally accepted standard". How accurate one needs to be in a particular calculation depends on the purposes of that calculation.

Say you are trying to compute a distance. How accurate do you need to be? If you are computing that distance to work out how many consumables a trip to Mars will need, it's probably fine to be within a few miles of the exact answer. If you are computing it to determine the appropriate size of a screw to fix the motor in your car, then being within a few miles of the exact answer is not going to do you any good.

In other words: different problems have different tolerances; in fact, figuring out the size of an appropriate tolerance is an important issue when designing experiments, buildings, etc. It's not something that is "agreed upon" in full generality.

That said, let me tell you the same thing I tell my students:

Why use approximations when you can give an exact answer?

Why are you using an approximation to, say, $\frac{1}{3}$, when it takes just as much effort, if not less, to simply use $\frac{1}{3}$ directly (even in the calculator, where you would type (1/3), a mere five keystrokes; that's even less than what it takes to type your 0.3333, which takes six keystrokes)?

Instead of $\frac{8\times 3.3333}{5}$ you can do $\frac{8\times\frac{10}{3}}{5} = \frac{8\times 10}{3\times 5} = \frac{16}{3}$ and get an exact answer.
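
If you do this on a computer rather than a pocket calculator, exact rational arithmetic is readily available; a minimal sketch using Python's standard fractions module:

```python
from fractions import Fraction

result = 8 * Fraction(10, 3) / 5   # kept exact throughout
print(result)                      # 16/3
print(float(result))               # 5.333333333333333, converting only at the very end
```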

Every time you use an approximation instead of an exact value, you introduce an error. If you then use the approximation in further calculations, which in turn are approximated, then you are magnifying the error. Even a very good approximation of a very good approximation of a very good approximation can turn out to be a very bad approximation of the right answer.

Never use approximations if you don't have to. Don't be scared of fractions, radicals, or other expressions; use them! If you must use an approximation, always wait until the very last moment to use the approximation, not before.
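
To make the magnification concrete, here is a small illustration of my own (not part of the original answer): a per-use error of about $3\times 10^{-5}$ grows to an error of about $0.1$ once the approximation has been reused a few thousand times.

```python
from fractions import Fraction

approx = 0.3333                        # looks like a perfectly good stand-in for 1/3

exact_total = 3000 * Fraction(1, 3)    # exactly 1000
approx_total = 3000 * approx           # about 999.9: the tiny error was added 3000 times

print(exact_total)                            # 1000
print(approx_total)                           # about 999.9
print(float(exact_total) - approx_total)      # roughly 0.1
```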

  • 1
    Upvote for the last paragraph (even though it's, as usual, a great answer!). I hate my physics teacher because he doesn't want results in exact form, say $\pi\sqrt{2}$, but in decimal form, $4.44288294$. Can't stand it. (2012-02-20)
3

The answer depends on what sort of subsequent computations you will be performing on such approximations. This is one of the central problems of the field of numerical analysis, understanding how roundoff errors propagate, numerical stability, well-conditioned functions, etc. For an introduction see the Wikipedia page.
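
As a concrete taste of what that field studies (my example, not the answerer's): two algebraically identical formulas can behave very differently once roundoff enters, for instance when nearly equal numbers are subtracted.

```python
import math

# f(x) = sqrt(x^2 + 1) - 1, written two algebraically equal ways
def naive(x):
    return math.sqrt(x * x + 1) - 1              # subtracts nearly equal numbers

def stable(x):
    return x * x / (math.sqrt(x * x + 1) + 1)    # same value, no cancellation

x = 1e-8
print(naive(x))   # 0.0    (every significant digit lost to cancellation)
print(stable(x))  # 5e-17  (close to the true value, about x**2 / 2)
```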