
I apologize for my total ignorance in the sphere of mathematics and the possibly very silly question I'm about to ask. My mathematical knowledge is quite limited (pretty much nothing beyond slightly-more-advanced-than-grade-12 material), so please, if possible, keep the terminology to about that level. I don't mean to offend anyone, and I'm sorry if the following sounds like a joke, but I am genuinely interested and cannot quite grasp the reason for it.

I've been curious for quite some time now: why does mathematics frequently require proofs of theorems for both the finite and the infinite case? Why isn't it satisfactory to prove a theorem up to some reasonably high finite x (whatever x is, be it a set of numbers)? The reason I'm asking is that in real-life applications (not software applications, but everyday ones like counting a bag of money) there is likely never a need to deal with infinitely many of anything; the quantity might be very large, but never infinite. So why does mathematics need a proof for the infinite case as well, instead of being satisfied with proving only the finite case?

Thanks for any advice!

  • 5
    One of the reasons is that math is often done not because of its applicability to real life, but for fun. Also, showing something for any x makes life much easier, because you know that your formula/theorem applies regardless of the quantity and there is no need to check your methods every time. (2011-01-29)
  • 0
    Thanks, InterestedQuest. This is something I'm confronted with at my job as a programmer: make it work for all cases if possible. So I know what you mean, but I'm wondering why not just set the boundary of x really high, so that you don't have to check your methods for 100 years or so. That's not too bad, I think; it's practical at least. I posted the question here http://mathoverflow.net/questions/53675/a-non-mathematicians-programmers-question-on-infinity and got a nice response: sometimes it's easier to prove the infinite case than a finite one, which sounds practical and useful. (2011-01-29)
  • 3
    @mikiyfi: I think there is a separate confusion here. For instance, while the set of all integers is infinite, *each* integer is itself finite. When dealing with, say, $\mathbb{Q}$, the collection of all rationals, while anything you will *do* with rationals involves only finite quantities, statements about "finite fields" do *not* apply to $\mathbb{Q}$ because the *set* of all rationals is infinite. A proof that something holds for integers almost never deals with infinite things, even though it applies to an infinite *collection* (all integers), etc. (2011-01-29)
  • 2
    Mathematicians often regard numbers as "real" objects with a life of their own. It is often only our curiosity that drives us to try to understand these objects and their lives. We want to uncover laws, rather than establish facts. From this point of view, knowing that there are no odd perfect numbers under $2^{100}$ would be pretty boring, while knowing that there are none at all would be interesting, because such a proof would hopefully explain this phenomenon. A computer search explains nothing. (2011-01-29)
  • 0
    Thanks for the response, Alex Bartel! So it's more or less human curiosity that drives this, I suppose? I think you are right; this may be the reason why the infinite cases are studied. Thanks again. (2011-01-29)
  • 1
    There is also the reason that others have mentioned before: it is often easier and less time-consuming to _prove_ something with your brain power, or to establish easy criteria that you can quickly check whenever you want to know if an object has property $X$, than to run long computer experiments every time you are handed a new such object (even for numbers that practical people care about, checking something numerically can take days or weeks). So even practical people who research differential equations, say, try to prove things, rather than solve each equation numerically on a case-by-case basis. (2011-01-29)
  • 0
    Thanks so much, Alex Bartel. I think I understand now. Have a great day. (2011-01-29)
  • 0
    Thanks for the correction, Arturo Magidin. Yes, I know the difference between 'proof' and 'to prove', but I'm tired and English is not my first language, so there it is. (2011-01-29)
  • 2
    @mikiyfi: I only mentioned it because after I corrected it the first time, you changed it back (English is not my first language either, for that matter). (2011-01-29)
  • 0
    @mikiyfi: Also: if you preface your comment with `@`, and that person has been commenting/answering, then they'll get a notification that you've addressed something to them. (2011-01-29)
  • 1
    @Arturo Magidin: Thanks for the pointers; I didn't realize I changed it :) But I see what happened: you edited it while I was editing it at the same time, so I overwrote your change without even seeing it. Thanks for your help! (2011-01-29)
  • 0
    It may not have any "applications" in the "real" world. But who cares? You do it because $\infty$ is beautiful. (2011-08-13)
  • 0
    I know it's late, but I can't help but add: http://math.stackexchange.com/questions/111440/examples-of-apparent-patterns-that-eventually-fail (see the short sketch after these comments for one such pattern). (2012-04-05)
  • 0
    Real-world example: how many parts can you divide a second into? We need to know that a theorem that deals with time stays applicable no matter how precise our measurements of time become. (2012-12-18)
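
The link in the comments above collects patterns that hold for many small cases and then fail. As one concrete illustration of why checking up to a high finite bound is not enough (my own example, not from the thread): Euler's polynomial $n^2 + n + 41$ is prime for every $n$ from 0 to 39, but fails at $n = 40$, since $40^2 + 40 + 41 = 41^2$. Below is a minimal Python sketch of such a check; the `is_prime` helper is written here just for the illustration.

```python
# Check how far the "n^2 + n + 41 is always prime" pattern holds.
# Every value from n = 0 to n = 39 is prime, so a finite check looks
# convincing -- yet the pattern breaks at n = 40.

def is_prime(k):
    """Trial-division primality test; fine for the small numbers used here."""
    if k < 2:
        return False
    d = 2
    while d * d <= k:
        if k % d == 0:
            return False
        d += 1
    return True

for n in range(45):
    value = n * n + n + 41
    if not is_prime(value):
        print(f"pattern breaks at n = {n}: {value} = 41 * {value // 41}")
        break
else:
    print("no counterexample found in this range")
```

Running it prints `pattern breaks at n = 40: 1681 = 41 * 41`, which is exactly the kind of late failure that makes a finite check, however large, inconclusive on its own.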

6 Answers