
It's said that you can't choose a random natural number. But what if you take a bijection between the natural numbers and, say, the rational numbers in the unit interval, and then choose a random rational number from that interval, and then take the corresponding natural? Why doesn't this work?
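As a concrete illustration of the setup in the question, here is a sketch (in Python, and only one of many possible choices) of an explicit bijection between $\mathbb N$ and the rationals in $[0,1]$, enumerating reduced fractions by increasing denominator:

```python
from fractions import Fraction

def rationals_in_unit_interval():
    """Enumerate the rationals in [0, 1] without repetition:
    first 0 and 1, then fractions p/q in lowest terms,
    ordered by denominator q and then by numerator p."""
    yield Fraction(0)
    yield Fraction(1)
    q = 2
    while True:
        for p in range(1, q):
            f = Fraction(p, q)
            if f.denominator == q:  # skip fractions that are not in lowest terms
                yield f
        q += 1

def nth_rational(n):
    """The bijection N -> Q ∩ [0, 1]: return the n-th rational (0-indexed)."""
    gen = rationals_in_unit_interval()
    for _ in range(n):
        next(gen)
    return next(gen)
```

The bijection itself is unproblematic; as the answer below explains, the obstruction is in the probability measure, not in the correspondence.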

  • @Michael Hardy: Maybe it should be "almost all". Then we algebraists can interpret it as "all except perhaps for a finite number". (2011-10-28)

1 Answer


The question was thoroughly answered in comments:

You can choose a random natural number, you just can't assign all of them the same probability. The same is true of the rationals, for the same reason. -- André Nicolas

and

the problem is that probability needs to be countably additive (if you have countably many pairwise disjoint events, then the probability that at least one of them occurs is the sum of the probabilities of each one of them). For the natural numbers/rationals, that means that if each of them has probability $0$ of being chosen, then the probability that some number is chosen is also $0$ (because you can obtain the total probability by adding all the individual probabilities), contradicting the requirement that the total probability be $1$. -- Arturo Magidin
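To make the first comment concrete: you can choose a random natural number, just not uniformly. A short Python sketch (using the geometric distribution, which is only one example) of a countably additive distribution on $\mathbb N$ whose point masses sum to $1$:

```python
import random

def random_natural(p=0.5):
    """Sample n with P(n) = p * (1 - p)**n, a geometric distribution
    on the naturals (including 0).  This is countably additive and
    its point masses sum to 1, but they are far from equal."""
    n = 0
    while random.random() >= p:
        n += 1
    return n

# The individual probabilities form a convergent series summing to 1:
partial = sum(0.5 * 0.5 ** n for n in range(60))  # 1 - 2**(-60)
```

Small numbers are far more likely than large ones; no countably additive assignment can make all naturals equally likely, which is exactly the obstruction described above.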

I'll answer the follow-up question

Now I'd like to know how you could modify that requirement [countable additivity] so that choosing a random rational number would be possible

-- the natural weakening of countable additivity is finite additivity. It is possible to put a finitely additive measure $\mu$ on $\mathbb Z$ so that the measure is shift-invariant and $\mu(\mathbb Z)=1$. See invariant mean. Every finite set gets measure zero. The set of even numbers gets measure $1/2$, formalizing the statement "half of all integers are even". So far so good. But there is no canonical (or even explicit) invariant mean $\mu$; their very existence relies on the axiom of choice. So, it's not clear what concrete and nontrivial probability statements one can get out of the existence of $\mu$.
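The statement "half of all integers are even" can be checked numerically through natural density, which the invariant mean extends (a sketch; the mean itself is not computable, as noted above):

```python
def density_of_evens(N):
    """Proportion of even numbers among 1..N.  As N grows this tends
    to 1/2, matching the invariant-mean value mu(evens) = 1/2."""
    return sum(1 for n in range(1, N + 1) if n % 2 == 0) / N
```

For sets where the limiting proportion exists, any shift-invariant mean must agree with it; the axiom of choice is needed only to extend the assignment to all subsets of $\mathbb Z$ at once.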