2

This is a rewrite of the question. See the original here.

For a given probability field $(\Omega, A, P)$, a random variable defined on that field is a function (I'll use a real-valued random variable for simplicity): $$X : \Omega \to \mathbb{R}$$

As I understand it from the comments, this should be interpreted as follows: having observed a random outcome $\omega \in \Omega$, the value $X(\omega)$ describes the consequences of that outcome.

To take an example given by Dilip Sarwate: if I toss a coin and heads comes up, I win \$1.00; if tails comes up, I lose \$0.50. In that case, the random variable would be $X(H) = 1.00,\ X(T) = -0.50$.
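
To check that I have the mechanics right, here is a minimal Python sketch of that coin example (the names `P` and `X` below are just my own labels for illustration): the random variable is a fixed table from outcomes to dollar amounts, kept separate from the probability measure.

```python
# Sample space Omega = {"H", "T"} with a probability measure P and a random variable X.
P = {"H": 0.5, "T": 0.5}      # probability of each outcome
X = {"H": 1.00, "T": -0.50}   # fixed map from outcomes to winnings in dollars

# Expected winnings: E[X] = 1.00 * P(H) + (-0.50) * P(T) = 0.25
expected_winnings = sum(X[outcome] * P[outcome] for outcome in P)
print(expected_winnings)      # 0.25
```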

If I got all of the above right, here's my question. Random variables (i.e., consequences of a random outcome) seem like a very high-level concept. Why are they necessary within an abstract mathematical theory such as probability theory?

  • 0
    You want to associate a number to each outcome. Suppose that $\Omega = \{H, T\}$ for a coin tossing experiment and you win $\$1$ if the outcome is $H$ and lose $\$0.50$ if the outcome is $T$. Your winnings (in dollars) are represented by a random variable $X$ that maps $H \to 1$ and $T \to -0.50$ whose expected value is $E[X] = 1\times P(H) + (-0.50)\times P(T)$ etc. That is the reasoning behind why the map is from $\Omega$ to $\mathbb R$, not the other way around. (2012-05-06)
  • 0
    @DilipSarwate So the purpose of $X$ is not to randomly pick a value $\omega \in \Omega$, but, already given such a value, to map it to a real number that, in a wordy explanation, represents the consequence of that random result? (2012-05-06)
  • 2
    Yes. **You** don't get to pick the outcome of a trial of the experiment, randomly or otherwise, because nobody trusts you to pick in accordance with the probability measure already defined. You get to _observe_ the occurrence of a trial and to observe the outcome that occurred, and take actions accordingly. Note by the way that a random variable $X$ is a _fixed_ mapping; you don't get to say "The outcome is $H$; I think I will let $X$ map it to $3.1415926$ this time: tomorrow is another day." (2012-05-06)
  • 0
    @DilipSarwate So $X$ should be viewed as a consequence. That seems like a very high-level concept. Why is it needed in the general theory? (2012-05-06)
  • 0
    @RahulNarain You are right. (2012-05-06)
  • 0
    @RahulNarain I misunderstood the purpose of a random variable. I rewrote the question. (2012-05-06)
  • 0
    @DilipSarwate I rewrote the question. (2012-05-06)
  • 1
    Technically speaking, elements $\omega$ of the sample space $\Omega$ are called *outcomes* while *events* are subsets of $\Omega$. Hence $\omega$ in $\Omega$ is **not** an event, while $\{\omega\}$ may or may not be an event, depending on the sigma-algebra under consideration. (2012-05-06)
  • 0
    @Didier That's right. I corrected my question. (2012-05-06)
  • 0
    Not quite completely, but never mind. Let me try to summarize your question (irrespective of whether I agree with your premises): (i) Probability theory is an *abstract mathematical theory* (AMT). (ii) Random variables are a *very high-level concept* (VHLC). My question: Why is it surprising that a given AMT involves some VHLCs? After all, this is exactly what AMTs are supposed to do, wouldn't you say? (2012-05-06)
  • 0
    @Didier This is deviating a bit into a possibly never ending discussion, but I'm more of the opinion that abstract theories should deal only in abstract notions and leave high-level details to each specific application of the theory. (2012-05-06)
  • 0
    No deviation intended. I tried to summarize (what I could make of) your question; you might want to confirm this is what you intend to ask. With the current formulation, in view of (i)-(ii), it seems only natural to see that random variables (aka VHLCs) enter an AMT such as probability theory. (2012-05-06)
  • 0
    @Didier Yes, this is what I meant. What part do you think I should make clearer? :) (2012-05-06)
  • 1
    To suggest avoiding random variables in probability theory is analogous to suggesting that functions be avoided in analysis. (2012-05-06)
  • 0
    @AndréNicolas Okay, but why? :) (I wasn't suggesting anything, though; I was just asking.) (2012-05-06)
  • 0
    I think of probability as the study of random variables. (2012-05-06)
  • 0
    If random variables in probability theory are not an abstract enough concept, then continuous functions are not an abstract enough concept in measure theory. (2012-05-06)
  • 0
    Paul: you might want to make clearer why you think that the fact that some VHLCs enter an AMT requires an explanation. (2012-05-06)
  • 0
    @Didier It's the "very high-level" vs. "abstract" contrast that doesn't seem to fit. I don't think I understand random variables well enough yet. (2012-05-06)

4 Answers

3

Kolmogoroff's probability axioms and the notion of a random variable are a prime paradigm of $20^{\rm th}$ century mathematics. To compute probabilities connected with coins and urns you could perhaps do without random variables; but if you want to tackle complicated stochastic phenomena like the weather, you absolutely need them.

A random variable has a priori nothing "random" about it: it is a well-defined function on a possibly huge "probability space" $\Omega$. An individual point $\omega\in\Omega$ might be a possible "world weather during 24 consecutive hours" and encode information about temperature, clouds, humidity, etc. at all points of the earth at all times of the day. In contrast to this ocean of possibilities, a real-valued random variable $T$ could be the temperature at Kennedy Airport, New York, at 12:00 p.m. on a given day. Given $\omega$, the value of $T(\omega)$ is well defined, but "chance" or "fate" chooses the point $\omega$ at which $T$ is evaluated. Note that it is absolutely impossible to "observe" the point $\omega\in\Omega$ in its totality, but we can observe $T$ on any given day, and we are even able to observe the temperature as a function of time, $t\mapsto T(t)$.

Kolmogoroff's axioms allow us to talk coherently about the "probability that it rains on three consecutive days at Kennedy Airport", or about the probability that the temperature there is $\leq 31^\circ$ Celsius at 9:00 a.m. tomorrow, without ever dealing with the intricacies of the space $\Omega$.
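
To make the picture concrete, here is a toy Python sketch (the data layout is invented purely for illustration): a sample point $\omega$ is a large record of weather data, and the random variable $T$ is a fixed function that extracts a single number from it.

```python
# Toy sketch: a sample point omega is a huge description of the "world weather";
# the random variable T is a fixed function that extracts one number from it.
def T(omega):
    """Temperature at Kennedy Airport at 12:00 p.m. for the world-weather state omega."""
    return omega["Kennedy Airport"]["12:00"]["temperature"]

# "Chance" supplies omega; we never observe all of it, yet T(omega) is well defined.
omega = {
    "Kennedy Airport": {"12:00": {"temperature": 28.5, "humidity": 0.60}},
    "Los Angeles":     {"12:00": {"temperature": 31.0, "humidity": 0.30}},
}
print(T(omega))   # 28.5
```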

0

The axioms of probability theory were designed to fit with the existing "layman's" notion of probability, i.e. the chance (a real number) of a certain event ($\omega$) occurring from a given set of possible outcomes ($\Omega$).

In probability theory, as in most branches of mathematics, necessity is the mother of invention, not the other way round.

  • 0
    I never expected engineering principles to creep into math. :| (2012-05-06)
  • 0
    When you say this, "the chance (a real number) of a certain event ($\omega$) occurring", are you talking about random variables? If so, isn't there already a function that does what you said, the $P$ in the field $(\Omega, A, P)$? (2012-05-06)
  • 0
    See my comment on the main question, regarding the use of the word *event*. (2012-05-06)
  • 1
    The point, I think, is that probability is not just an abstract mathematical theory invented for its own sake. It was very much motivated by real-world problems. And those real-world problems often involve random numerical quantities (e.g. how much do I win or lose when I play this gambling game) which are best modelled using random variables. (2012-05-06)
0

Probability theory grew out of the mathematical treatment of gambling. It came to life as a very concrete subject. There are several ways to formalize probability, and it is possible to formulate it in a completely point-free way: you dispose of the space $\Omega$ and just work with a Boolean $\sigma$-algebra of events and a probability measure defined on it. Random variables can then be defined as Boolean $\sigma$-homomorphisms. If you want to recover the points, you can take $\Omega$ to be the space of maximal consistent descriptions in terms of events (they are ultrafilters). This approach is well described and motivated in the beautiful paper *On the axiomatic treatment of probability* (1955) by Jerzy Łoś. The cost is an additional layer of abstraction.
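
For orientation (this translation is standard and not something specific to Łoś's paper): an ordinary real-valued random variable $X$ on $(\Omega, A, P)$ already induces such a Boolean $\sigma$-homomorphism by taking preimages of Borel sets,
$$h_X : \mathcal{B}(\mathbb{R}) \to A, \qquad h_X(B) = X^{-1}(B),$$
which preserves complements and countable unions, e.g. $h_X\bigl(\bigcup_n B_n\bigr) = \bigcup_n h_X(B_n)$; so the point-free definition is compatible with the usual one without ever mentioning individual points $\omega$.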

-2

I think that we need a random variable just because it is a number: the thing we all have experience dealing with from the moment we start counting change at a grocery store.

  • 1
    But it's not always a number. A random variable can map to anything, not just numbers. I mapped it to real values in my post for simplicity. (2012-05-06)
  • 0
    @PaulManta The random variables which are not numbers are not necessary :) (2012-05-06)
  • 0
    You can map any (un)countable set to some part of $\mathbb{R}$, so they are not necessary in a practical sense, but I think they are necessary in a theoretical sense if you want to make the concept general. (2012-05-06)
  • 0
    @PaulManta: You can also map every set to a set with a single element. Why is that important? (2012-05-06)
  • 0
    Random variables are not numbers. Ever. (2012-05-06)
  • 0
    @MichaelGreinecker Map bijectively. I forgot to mention that. (2012-05-06)
  • 1
    @PaulManta: Not every uncountable set has the same cardinality as the real numbers. (2012-05-06)
  • 0
    @MichaelGreinecker Oh, I didn't know that. What examples of such sets are there? (2012-05-06)
  • 0
    @PaulManta: Cardinality can be complex, but a good starting point is [Cantor's Theorem](http://en.wikipedia.org/wiki/Cantor's_theorem): For every set, the set of all of its subsets has a larger cardinality. In particular, there are more sets of real numbers than there are real numbers. (2012-05-06)