(This expands on some ideas I touched on in the post about the single-event probability fallacy. If you have a sense of déjà vu, that’s why. It’s a different angle on the same ideas.)
Probability theory is abstract mathematics. It has the same axioms as measure theory (plus one saying the measure of the whole space is 1, though that’s really just a convention), but it focuses on different things. As an abstract theory, it has applications.
One is to the frequencies of outcomes of repeated events, such as rolling a dice, making a component on a machine tool, or tracking the path of a small particle surrounded by fast-moving smaller particles. With a suitably set-theoretic understanding of what ‘events’ and ‘outcomes’ are, probability theory can be shown to apply to such frequencies.
Another application is to betting odds, though here probability theory does not apply as a description but rather as a prescription. If the betting odds are to be ‘fair’, that is, if the odds don’t favour the bookmaker or the customer, those odds must follow the laws of probability.
The same applies to the idea of ‘degree of belief’, whatever that means and however we measure it. If those degrees of belief are to be consistent, they must follow the laws of probability. Betting-odds and degrees of belief are sometimes called subjectivist probability.
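The standard argument for that prescription is the so-called Dutch book: incoherent degrees of belief can be turned into a guaranteed loss. A quick sketch, with my own made-up numbers rather than anything from the literature:

```python
# A Dutch-book illustration (hypothetical numbers): if someone's degrees
# of belief in an event and its complement sum to more than 1, then bets
# priced at those beliefs guarantee them a loss whatever happens.

belief_rain = 0.6      # hypothetical degree of belief that it rains
belief_no_rain = 0.6   # ...and that it doesn't: incoherent, sums to 1.2

stake = 1.0  # each bet pays `stake` if it wins, costs belief * stake up front

cost = (belief_rain + belief_no_rain) * stake   # price paid for both bets
payout = stake                                  # exactly one of the bets wins

print(cost - payout)   # a guaranteed loss, rain or shine
```

If the two beliefs summed to exactly 1, the cost would equal the payout and neither side could be exploited: that is the sense in which coherent beliefs must follow the laws of probability.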
In earlier and less enlightened times, there were heated arguments over which was the ‘real theory of probability’, and both sides missed the point that they were discussing different applications of the same abstract theory, and as a result were having an argument about whether over-easy or well-done was the correct way of cooking eggs.
In addition, there was something called ‘The Principal Principle’ stating that the rational degree of belief in the outcome of a repeated event is its frequency. The result is that, if we are talking about repeated outcomes, probability means frequencies.
This leaves the question of what we might mean by the probability of single events and how it might be measured. The ingenuity of some answers rivals the madder interpretations of Quantum Mechanics. Some of them, such as the Possible Worlds interpretation, turn out to be frequencies in disguise. (I’m not going to describe that: it’s like the Multiverse and just as non-empirical.) It’s not that those interpretations don’t work: it’s that only about forty people at any given time can understand them, and none of them work as statisticians. So whatever the working statisticians might mean, it’s not what the ingenious people suggest.
Personally, I think that phrases like ‘I don’t think that’s very likely’ or ‘I wouldn’t be surprised’ or ‘That’s probably what happened’ are figures of speech, referring, if to anything, to something that does not have to obey the probability calculus. There is no obligation on the figurative speech of ordinary people to obey rules made up by mathematicians. People do believe things, and that belief may be a bodily sensation, as the disappointment of a belief often is. Maybe those figures of speech are about the strength of those belief-sensations. We can, of course, say that if those belief-sensations are to be rational, they need to obey the probability calculus, but what we can’t say is that if they don’t, then ordinary people should not use probability-words to express their beliefs. Ordinary language got there first.
Similar issues affect the idea of the expected value. The expected value, or expectation, or prevision in French, or average in GCSE arithmetic, is a mathematical construction: the sum of the probability-weighted outcome values. The formal expected value of a single roll of a fair dice is 1/6 + 2/6 + 3/6 + 4/6 + 5/6 + 6/6 = 21/6 = 3.5, and that’s never going to appear on any roll of a six-sided dice. (A fair dice has no modal value either - or perhaps it has six - and its median is any value between 3 and 4: half the throws will be below it and the other half above it.)
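The arithmetic is easy enough to check exactly, using rationals so no rounding sneaks in:

```python
from fractions import Fraction

# Expected value of one roll of a fair six-sided dice:
# the probability-weighted sum of the outcomes.
faces = range(1, 7)
expected = sum(Fraction(1, 6) * face for face in faces)

print(expected)   # 7/2, i.e. 3.5 - a value no single roll can show
```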
In a game with payoffs of £0 and £100, with equal odds, the expected value is £50, but that will never be the result of an individual trial: the payoffs are £0 or £100. It is what we would expect the long-run average payoff per trial to be. However, an actual sequence of trials whose average reached and stabilised at exactly £50 after a ‘reasonable number’ of trials would be quite rare: what we should really expect is that the actual average payoff per trial appears to converge to £50 as the number of trials increases. Measuring an expected value in practice is much more complicated than calculating it.
We can always make a formal calculation and, rightly, call that the expected value. But we must ask how that value is to be measured, and if it can’t be, or only has a meaning in some series of counter-factual logical universes, then it remains a formal calculation with no practical application. We can calculate the expected value of a one-off event, but we can’t measure it. Measuring expected values is a process that refers implicitly to a run of outcomes. The formal calculation for a single event is correct, but formal correctness is no guarantee of empirical application.
Since the formal expected value of our game has no empirical meaning for one event, it can’t be a guide to any decision we make. This has, as I’ve discussed before, some consequences for so-called rational economics.