
Bayesian probability interprets the probability of a random variable as a degree of belief. But does this make any difference to the interpretation of the random variable itself?

1 Answer


I agree with Edwin Jaynes that the word "random" should be banished from this context. Suppose you're uncertain of the average weight of male freshmen at Very Big University, which has 100,000 male freshmen. You have a complete list of their names, from which you can choose 30 at random and weigh them. You can't possibly afford the cost of weighing more than a few hundred, and you're not comfortable paying for even that many. Say you have a prior probability distribution specifying the probability that the average weight is between $a$ and $b$, for any positive numbers $a$ and $b$ you might pick. Then, based on the observed weights of the 30 randomly chosen students, you find a posterior distribution, i.e. a conditional probability distribution given those observations.

Next you could pick another random sample of 30 and further update your information.
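To make the updating concrete, here is a minimal sketch in Python, assuming a normal prior on the unknown average weight and a known standard deviation for individual weights. These modeling assumptions, and all the numbers, are invented for illustration; nothing in the answer above commits to this particular model.

```python
import numpy as np

def update_normal(prior_mean, prior_sd, observations, obs_sd):
    """Conjugate normal-normal update for an unknown mean, known obs_sd."""
    n = len(observations)
    prior_prec = 1.0 / prior_sd**2            # precision = 1 / variance
    data_prec = n / obs_sd**2
    post_prec = prior_prec + data_prec
    post_mean = (prior_prec * prior_mean
                 + data_prec * np.mean(observations)) / post_prec
    return post_mean, np.sqrt(1.0 / post_prec)

rng = np.random.default_rng(0)

# Invented prior: average weight around 75 kg, give or take 10 kg.
mean, sd = 75.0, 10.0

# Two successive random samples of 30; after each one, the posterior
# becomes the prior for the next update.
for i in range(2):
    sample = rng.normal(72.0, 12.0, size=30)  # stand-in for weighed students
    mean, sd = update_normal(mean, sd, sample, obs_sd=12.0)
    print(f"after sample {i + 1}: posterior mean {mean:.2f} kg, sd {sd:.2f} kg")
```

The second pass through the loop is exactly the "pick another random sample of 30" step: the posterior from the first sample serves as the prior for the second.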

What is random?

I would prefer to use the word "random" to refer to that which changes every time you take another sample of 30 (or of 20, etc.). So the observed average weight of the students in your sample is a "random variable". But notice that we have also assigned a probability distribution to the average weight of all 100,000 male freshmen, a quantity we cannot observe. That quantity remains the same when a new sample is taken; it is therefore not "random" in this sense, yet we have assigned a probability distribution to it. By the prevailing conventions of standard Kolmogorovian probabilistic terminology, we are treating that population average as a "random variable". I would prefer to call it an "uncertain quantity".
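A small simulation makes the terminological point. The population is simulated here only because we don't have the real one, and the distribution used to generate it is an arbitrary stand-in; what matters is that the population, once generated, is fixed.

```python
import numpy as np

rng = np.random.default_rng(1)

# Generate the population once and then leave it alone: its average is a
# fixed (if unknown-to-us) number, not something that varies between samples.
population = rng.normal(72.0, 12.0, size=100_000)
print(f"population average (fixed): {population.mean():.2f} kg")

# Each fresh sample of 30 yields a different sample average; *that* is the
# quantity that varies from draw to draw, i.e. "random" in the sense above.
for i in range(3):
    sample = rng.choice(population, size=30, replace=False)
    print(f"sample {i + 1} average: {sample.mean():.2f} kg")
```

The population average printed first never changes; only the sample averages do.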

However, this does not alter the mathematics. Is there a difference in "interpretation"? There is, if by that one means: are we interpreting the quantity to which we assign a probability distribution as being random, in the sense of changing when we take a new sample, or as an uncertain quantity that does not change when we take a new sample? The way in which one applies the mathematics differs; the axioms of probability do not.

This does raise the question of why the same rules of mathematical probability should apply to uncertain quantities that cannot be interpreted as relative frequencies or as proportions of a population, etc. A number of authors have written about that question, including Bruno de Finetti, Richard Cox, and me. Apparently no one gets very excited about the results, since the conclusion is that one should not use different mathematical methods. "Since there's no difference, who cares?" seems to be the prevailing attitude.

There are some who question whether countable additivity or merely finite additivity should be taken as axiomatic. De Finetti was one of those. Dubins & Savage, in their book Inequalities for Stochastic Processes, assumed only finite additivity, but that may be only because they wanted to avoid some icky technical issues that might have taken them off topic.
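For concreteness, the two candidate axioms differ only in how many disjoint events they quantify over. Finite additivity requires that for pairwise disjoint events $A_1, \dots, A_n$,
$$P\!\left(\bigcup_{i=1}^{n} A_i\right) = \sum_{i=1}^{n} P(A_i),$$
while Kolmogorov's countable additivity extends this to countably infinite collections of pairwise disjoint events:
$$P\!\left(\bigcup_{i=1}^{\infty} A_i\right) = \sum_{i=1}^{\infty} P(A_i).$$
Countable additivity implies finite additivity but not conversely, and the gap between the two is where the technical subtleties live.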

I see that I haven't carefully cited all the works I've mentioned. Maybe I'll get to this later…

  • In a book about Kalman filters I found the following story: random variables can be compared with the Holy Roman Empire. Just as the Holy Roman Empire 1) was __not__ an empire, 2) was __not__ Roman, and 3) was __not__ holy, a random variable is 1) __not__ random and 2) __not__ a variable. (It is just a measurable function!) – 2012-09-14