While I certainly cannot do better than Tao (and admit that I haven't read Qiaochu's links) I'd like to make the following point:
I see the reason for essentially restricting to $(\Omega,\Sigma,\mu) = [0,1]$ as a rather technical one. If one would like to have sequences of i.i.d. random variables, one needs to be able to form the product space $(\Omega,\mu)^{\mathbb{N}}$, i.e., one needs some form of Kolmogorov's consistency theorem. If $\Omega$ is allowed to be too nasty, product spaces can be very degenerate: there are examples in the books of Halmos and Neveu of a countable product carrying only the trivial measure even though the factors themselves are far from trivial. The point is that $\Omega$ might simply be "too big" to be reasonable.
A very flexible and technically useful class of "reasonable" measurable spaces is the class of standard Borel spaces (for which, luckily, Kolmogorov's consistency theorem does hold). By definition, these are the measurable spaces that are measurably isomorphic to a complete and separable metric space equipped with its Borel $\sigma$-algebra. Here's the surprise (Hausdorff, von Neumann):
Every uncountable standard Borel space is isomorphic to $[0,1]$ with the Borel $\sigma$-algebra. Moreover, every non-atomic probability measure on a standard Borel space is isomorphic, as a measure space, to Lebesgue measure on $[0,1]$.
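To make this less abstract, here is a small numerical sketch (my illustration, not part of the theorem's proof): the binary-expansion map $(b_1, b_2, \dots) \mapsto \sum_k b_k 2^{-k}$ carries the i.i.d. fair-coin measure on $\{0,1\}^{\mathbb{N}}$ to Lebesgue measure on $[0,1]$, which is exactly the kind of measure isomorphism the theorem provides.

```python
import random

# Illustration: the map (b_1, b_2, ...) -> sum_k b_k * 2^{-k} pushes the
# i.i.d. fair-coin measure on {0,1}^N forward to Lebesgue measure on [0,1].
# We truncate at 32 bits, which is exact up to an error of 2^{-32}.

def coins_to_unit_interval(bits):
    """Map a finite 0/1 sequence to a point of [0,1] via binary expansion."""
    return sum(b * 2.0 ** -(k + 1) for k, b in enumerate(bits))

random.seed(0)
n, depth = 100_000, 32
samples = [coins_to_unit_interval([random.randint(0, 1) for _ in range(depth)])
           for _ in range(n)]

# Under Lebesgue measure, the mass of [0, t] is t; the empirical
# frequencies should agree up to sampling error.
for t in (0.25, 0.5, 0.9):
    freq = sum(x <= t for x in samples) / n
    assert abs(freq - t) < 0.01
```

This also shows why the product-space issue from above is harmless here: the countable product $\{0,1\}^{\mathbb{N}}$ is itself standard Borel, so nothing degenerates.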
So from this point of view there is essentially no restriction in assuming $\Omega = [0,1]$ to begin with. Atoms are not really an issue either: since we are dealing with a probability space, there are at most countably many of them, so we just get a union of an interval and some countable set.
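A concrete way to see how atoms are absorbed (again my own sketch, with a made-up example law): the quantile function of any probability measure on $\mathbb{R}$ realizes that measure as a pushforward of Lebesgue measure on $[0,1]$, with each atom corresponding to an interval of the same length. Here the law is $0.3\,\delta_0 + 0.7\,\mathrm{Unif}[1,2]$, so the atom at $0$ occupies the interval $[0, 0.3)$.

```python
import random

# Sketch: realize the measure 0.3*delta_0 + 0.7*Unif[1,2] on the sample
# space ([0,1], Lebesgue) via its generalized inverse CDF (quantile
# function). The atom of mass 0.3 corresponds to the subinterval [0, 0.3).

def quantile(u):
    """Generalized inverse CDF of 0.3*delta_0 + 0.7*Unif[1,2]."""
    return 0.0 if u < 0.3 else 1.0 + (u - 0.3) / 0.7

random.seed(0)
n = 100_000
samples = [quantile(random.random()) for _ in range(n)]

# The atom keeps its mass, and the continuous part is spread correctly.
atom_mass = sum(x == 0.0 for x in samples) / n
assert abs(atom_mass - 0.3) < 0.01
assert abs(sum(1.0 <= x <= 1.5 for x in samples) / n - 0.35) < 0.01
```

So a random variable with any mix of atomic and continuous parts lives comfortably on $[0,1]$ with Lebesgue measure.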