
What would be the advantage of accepting non-measurable sets?

I personally feel that non-measurable sets only exist because of infamous Banach-Tarski paradox...

  • @Gedgar: In Garnir's paper where he proved automatic continuity for a class of topological vector spaces (including Banach spaces) he describes the three parts of functional analysis (in his view, from the early 1970's) as constructive (in ZF); classical (in ZFC); and Solovayan (in Solovay-like models). I wonder what he would say about this today... (2012-07-18)

2 Answers

17

One correction to your question: non-measurable sets were actually proved to exist (assuming the axiom of choice) by Vitali in 1905; his construction is now known as a Vitali set. The Banach-Tarski paradox appeared about two decades later, in 1924.
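For reference, here is a sketch of Vitali's construction (standard, though not spelled out in the original answer):

```latex
% Sketch of Vitali's construction of a non-measurable set.
Define $x \sim y \iff x - y \in \mathbb{Q}$ on $[0,1]$, and use the axiom of
choice to pick a set $V$ containing exactly one representative of each
equivalence class. Enumerate $\mathbb{Q} \cap [-1,1] = \{q_1, q_2, \dots\}$
and set $V_n = V + q_n$. The $V_n$ are pairwise disjoint, and
\[
  [0,1] \;\subseteq\; \bigcup_{n} V_n \;\subseteq\; [-1,2].
\]
If $V$ had a Lebesgue measure, translation invariance and countable
additivity would force $1 \le \sum_n \lambda(V) \le 3$, which fails both
when $\lambda(V) = 0$ and when $\lambda(V) > 0$. Hence $V$ is non-measurable.
```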

There is no immediate advantage in accepting the existence of non-measurable sets. In fact, it "harms" us in some ways: it means we have to be more careful about how we define measures, and so on.

However, there is a great advantage in accepting the axiom of choice, or at least the ultrafilter lemma (a weakened form of choice), both of which imply the existence of non-measurable sets. In fact, much weaker statements than the axiom of choice already imply the existence of non-measurable sets. To name a few:

  1. The [weak] ultrafilter lemma,
  2. The Hahn-Banach theorem (which also implies the Banach-Tarski paradox),
  3. The real numbers can be well-ordered,
  4. Every family of pairs has a choice function.

To read more, you can try Herrlich's wonderful chapter about measurability in his book The Axiom of Choice.

Whether or not to accept such existence boils down, in essence, to what you are trying to do. If you want to do finitistic mathematics, dealing with finitely generated objects and a limited collection of their subsets, then there is no harm in not assuming the axiom of choice.

However, if you wish to deal with infinitely generated objects, such as $\ell_2(\mathbb N)$, or with other measure-theoretic necessities, then the axiom of choice is usually needed to allow a "smooth" transition from finitely generated objects to infinitely generated ones.

The key problem is provability: many properties depend on the axiom of choice, and we simply cannot decide their truth value without it. So without it you end up having to assume a lot of separate statements rather than simply saying "assume choice". In this respect, assuming the axiom of choice both decides a lot of properties (though not all, of course) and allows immediate generalization of proofs to higher cardinalities.

To read more:

  1. Advantage of accepting the axiom of choice
  2. Is Banach-Alaoglu equivalent to AC?
  3. Foundation for analysis without axiom of choice?
  4. Axiom of choice and calculus
  5. Number Theory in a Choice-less World
  6. Can one construct a non-measurable set without Axiom of choice?
11

I'm sure you will get very competent answers discussing the relation between non-measurable sets and the axiom of choice and the like, and the relation between finitely additive and $\sigma$-additive measures.

Here I want to give a fairly practical reason why it might be preferable to have a measure defined not on the whole power set but on a smaller $\sigma$-algebra.

There is no intuitive notion of measure or volume for arbitrary sets of points. We do have, however, a fairly good notion of volume for certain geometrical objects, such as rectangular blocks. It is natural to extend our notion of volume from elementary objects to more complicated objects by approximating the more complicated objects by simpler objects. For example, we can approximate the volume of a ball by approximating it by the disjoint union of many very small cubes. Now there is no reason why we should be able to approximate every set of points meaningfully by simple objects we know the volume of. So if we want to assign a notion of volume or measure to every set, we are going to have to make some ad-hoc choices.
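The cube-approximation idea above can be sketched numerically. The following is an illustrative Python snippet (the function name `ball_volume_grid` is made up for this example, not from the answer): it tiles the bounding box of the unit ball with $n^3$ small cubes and counts those whose centers lie inside the ball, which converges to the ball's true volume $\frac{4}{3}\pi r^3$ as the cubes shrink.

```python
import math

def ball_volume_grid(radius=1.0, n=100):
    """Approximate the volume of a ball in R^3 by tiling its bounding
    box with n^3 small cubes and summing the volumes of those cubes
    whose centers fall inside the ball."""
    h = 2 * radius / n  # side length of each small cube
    count = 0
    for i in range(n):
        for j in range(n):
            for k in range(n):
                # center of cube (i, j, k)
                x = -radius + (i + 0.5) * h
                y = -radius + (j + 0.5) * h
                z = -radius + (k + 0.5) * h
                if x * x + y * y + z * z <= radius * radius:
                    count += 1
    return count * h ** 3

approx = ball_volume_grid(1.0, 100)
exact = 4 / 3 * math.pi  # true volume of the unit ball
```

The point of the answer is exactly that this procedure works because the ball's boundary is "tame": the cubes straddling the boundary contribute a vanishing total volume as $n$ grows. For a set like a Vitali set, no such approximation by simple objects pins down a volume.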

The argument has even more bite in probability theory. Say we want to describe a probability measure on the real line. The usual way to do this in practice is to specify a cumulative distribution function, which essentially pins down the measure of each interval, and then to use a result saying that a probability measure on the Borel $\sigma$-algebra is uniquely determined by its values on intervals. If we wanted to assign a measure to every set, we would have to specify a lot of values for sets that do not even occur in practice.
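To make the CDF point concrete, here is a minimal Python sketch (the helper names are mine, not from the answer): the standard normal CDF, written in terms of the error function, already determines the measure of every interval $(a, b]$ as $F(b) - F(a)$, and by the uniqueness result this fixes the measure on the entire Borel $\sigma$-algebra with no further choices.

```python
import math

def normal_cdf(x):
    """CDF of the standard normal distribution, via the error function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def interval_measure(a, b):
    """Measure of the interval (a, b] under the standard normal law.
    The CDF alone determines this; no values for exotic sets are needed."""
    return normal_cdf(b) - normal_cdf(a)

p = interval_measure(-1.0, 1.0)  # the familiar "one sigma" probability
```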

  • @MichaelGreinecker: The notion I am using is not standard, and requires a reasonably rigorous development separate from the development of ZFC. The idea is to make sets countable collections, and make the real numbers a proper class. In order to make this convenient, you need to show that the constructions of maps of real numbers and functions of real numbers can be carried out just as easily in this different perspective, and that requires a heavier slog than the comments here. I will write and link to a development of such a theory. One also needs to embed all countable models of ZFC. (2015-06-01)