A circle has an exact, finite diameter, circumference, and area. If $\pi$ is only a "mathematical constant" that is derived independently of the parts of a circle, and is, indeed, an irrational number, then is its use in the equation $C = 2\pi r$ appropriate? Perhaps the equation should have an "almost equal" sign instead.
The Nature of $\pi$
-
0 Is this about $\pi$ or irrational numbers in general? – 2017-01-19
-
1 You can't really take exact measurements of those quantities; the problem is how accurately you can measure. This problem bothered the Greeks so much that they banned square roots. – 2017-01-19
-
2 Simply as a cultural phenomenon, we tend to view decimal expansions as a fundamental part of our numbers. This isn't a mathematical view, nor a historical one (decimal is certainly not the only base that's ever been used, much less the first, and I would strongly suspect that arithmetic tends to predate the $b$-ary representation of numbers). What you're suggesting is not a problem about $\pi$ so much as our approximations of it for computational purposes. – 2017-01-19
-
0 @AJY: Right on. Decimal expansions are about as naturally fundamental as the fact that we have ten fingers—that is to say, not fundamental at all. – 2017-01-19
-
1 $\pi$ will always be exactly $\pi$, and the math knows this. It's our computers that don't. However, it's a fun fact that 11 decimal places are all you'll ever need in practice, and most calculators store 15. – 2017-01-19
-
0 $\pi$ is used in this context to describe a real-world situation (if one considers the concept of "circle" real), just using an irrational number. While the term "irrational" may connote illogical or unreasonable entities, $\pi$ quite satisfactorily deals with circles. Even if the five millionth digit is inaccurate in a calculated result, I don't know if we could ever measure the resulting error (in reality). – 2017-01-19
-
3 A perfect circle, whose area would be exactly $\pi r^2$, is no more tangible and real than $\pi$ itself. – 2017-01-19
3 Answers
Simply as a cultural phenomenon, we tend to view decimal expansions as a fundamental part of our numbers. This isn't a mathematical view, nor a historical one (decimal is certainly not the only base that's ever been used, much less the first, and I would strongly suspect that arithmetic tends to predate the $b$-ary representation of numbers). What you're suggesting is not a problem about $\pi$ so much as our approximations of it for computational purposes.
The more fundamental issue at work in your question is that $\pi$ is irrational; that is, it cannot be written as $p/q$ for integers $p, q$. The decimal expansion provides a sequence of rational approximations of a number (writing a number out to $n$ digits past the decimal point gives an approximation within $10^{-n}$ of the number approximated). However, we like to compute with rational numbers, so when we compute values for something like $\pi + e$, we usually seek a sufficiently good rational approximation. Often we prefer decimal, or more generally $b$-ary, approximations (it's typically very easy to do arithmetic this way), as you hinted at.
But the same can be said of $1/3$. I might truncate it as $0.33333$, or maybe as $0.3333333333333333333333333333333$. And if I want to compute $2 \times 1/3$, these will give me different values. Do we have a crisis? No, because I understand there's a basic difference between $1/3$ and a decimal approximation of it. By the same token, we don't have a crisis if computing $2 \times 3.1 < 2 \times 3.14 < 2 \times 3.141 < \cdots$ gives us different "values" of $2\pi$, because we understand that $\pi$ is different from $3.1, 3.14, 3.141, \ldots$. We can content ourselves with the fact that, in computation, we can do arithmetic with $\pi$ to arbitrary precision by writing out enough digits.
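The convergence described above can be sketched in a few lines of Python (the particular truncations are illustrative, not part of the original answer):

```python
from math import pi

# Decimal truncations of pi are rational numbers; doubling each one
# gives a sequence of values that approaches 2*pi without ever
# reaching it exactly.
truncations = [3.1, 3.14, 3.141, 3.1415, 3.14159, 3.141592]
errors = [abs(2 * pi - 2 * t) for t in truncations]

for t, e in zip(truncations, errors):
    print(f"2 * {t} differs from 2*pi by {e:.2e}")

# Each extra truncated digit shrinks the error by roughly a factor
# of ten, so the sequence of errors is strictly decreasing.
assert all(e1 > e2 for e1, e2 in zip(errors, errors[1:]))
```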
In our everyday handling of numbers we use integers, fractions, and finite decimal fractions. We are inclined to accept $9.876:5.4321=1.81808$, even though deep inside we know that this is not true mathematically. Now there are numbers, like $\sqrt{2}$, $\pi$, and $e$, that have a precise mathematical definition but no finite representation as fractions, let alone decimal fractions. This is a mathematical fact that you have to accept.
Fortunately the standard operations with numbers (addition, subtraction, multiplication, and division) are continuous. Therefore in practice there is no harm done when a carpenter operates with $3.14159$ instead of $\pi$ in his shop.
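A quick sketch of just how small the carpenter's error is (the tabletop radius here is a made-up example):

```python
from math import pi

# Hypothetical shop job: a circular tabletop of radius 0.5 m.
radius = 0.5
exact = pi * radius**2        # area using double-precision pi
approx = 3.14159 * radius**2  # area using the carpenter's value

error = abs(exact - approx)   # in square metres
print(f"area error: {error:.2e} m^2")

# The error is under a millionth of a square metre -- far below
# anything a saw or tape measure can resolve.
assert error < 1e-6
```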
If you use $\pi$ to $5$ places or $5{,}000{,}000$, the answers will not differ by much, but they will differ. Is this an example of mathematics failing to deal with a simple, tangible, real-world problem?
The difference in question is inconsequential to the problem's real world aspect, since the latter does not require infinite precision.
Even in an ideal world, irrational lengths $($e.g., a unit square's diagonal or a unit-diameter circle's circumference, to give just two examples$)$ are not actually constructed by way of infinite addition.
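To compare truncations beyond ordinary double precision, one can sketch the computation with Python's `decimal` module; the Earth-sized diameter is an illustrative assumption, and the 30-place value of $\pi$ is the standard published expansion:

```python
from decimal import Decimal, getcontext

getcontext().prec = 50
# pi to 30 decimal places (standard published value).
PI_30 = Decimal("3.141592653589793238462643383279")

d = Decimal("12742000")            # rough diameter of the Earth, in metres
circ_5 = Decimal("3.14159") * d    # circumference with pi to 5 places
circ_30 = PI_30 * d                # circumference with pi to 30 places

error = abs(circ_30 - circ_5)      # in metres
print(error)

# Even for an Earth-sized circle, truncating pi at 5 places shifts
# the circumference by only about 34 metres; by 20 places the shift
# is already far smaller than an atomic diameter (~1e-10 m).
```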
-
0 When I first considered this, it reminded me of the old Greek paradox (Zeno's arrow) where you shoot an arrow at a target and somehow it never gets there. Well, the arrow does get there, and with the proper equipment and technique the diameter and circumference of a circle could probably be measured down to plus or minus an atomic diameter. That's the real world I was speaking of. I understand that it doesn't matter in practical applications, but it would be interesting to see how that quotient compares to the other computed values. – 2017-01-20