Thursday, 18 January 2018

The Single-Event Probability Fallacy

Here’s a choice. I can give you £50. Or you can play a game which gives you a 50% chance of winning £100. Which do you want to do?

Mostly, people take the money. All behavioural economists, including a couple of guys with the ‘Nobel’ in Economics, think that makes us irrational. Well, maybe not.

Here’s the really simple rebuttal.

When the outcomes have equal expected values (probability-weighted values), as this game does, then, by definition, expected values can’t help us decide. In that case, we bring in some other idea or rule. Many of these will work in some circumstances, but will fail for some cunningly constructed example. That failure doesn’t mean the rule is useless in every case, just in those cases constructed by a behavioural economist to make a point. Here’s my rule: “I know what you’re thinking: which way will the coin fall? Well, seeing as how this is a behavioural economics experiment with a very limited budget, you gotta ask yourself: do I feel lucky? Well, do you? Do you feel lucky?” Yep. Thought so. Take the £50 instead.
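If you want the arithmetic behind that ‘equal expected values’ claim, here it is as a minimal sketch (Python, using only the numbers from the example above):

```python
# Expected value of each offer in the opening example.
certain_offer = 50                 # take the £50
gamble_ev = 0.5 * 100 + 0.5 * 0    # 50% chance of £100, otherwise nothing

print(certain_offer, gamble_ev)    # 50 50.0 -- the same probability-weighted value
```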

When this example is presented, at least by a cute French PhD pitching her start-up, there’s a suggestion that, given two options with equal expected values, we ‘should’ choose the one that offers the largest pay-off. What that proves is that I’m conservative, and the cute French PhD is a risk-taker, which is why she’s in a start-up and I’m a wage-slave.

If the algorithm her company is developing treats my decision as a guide to my temperament, I almost don’t have a problem.

Except. There’s always an unstated assumption that you have one shot at playing the game. Investment decisions are one-shot, unless we keep changing our minds, in which case the fees will eat up any gains we make. Now here’s the catch: the kind of probability that applies to single events is not the kind of probability we can use to calculate expected values.

We talk in an ordinary way about single events being likely or unlikely, and while it’s never exactly clear what we mean by it, one way to think of it is that we should devote our limited resources to the option we consider ‘most likely’ and that we can also do something about. Whatever we do mean by talking about the outcomes of a single event being likely or unlikely, it isn’t the frequency of those outcomes over a run of that event. Because we’ve only got one.

We can have ‘degrees of belief’ about the outcome of single events - because we can have ‘degrees of belief’ about anything. But it makes no sense to multiply degrees of belief by pay-offs and treat the result as something real.

Suppose you’re playing a game where outcome A pays off £50 and outcome B pays nothing. The odds of A to B are 50:50. The expected pay-off is £25 per play. In the long run. This is frequencies. It makes sense to multiply frequencies by pay-offs, because that’s a shorthand way of playing the game in the long run. The calculation corresponds to something you can observe.
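If you’d rather see that correspondence than take it on trust, here’s a small simulation sketch. The number of plays is an arbitrary choice of mine; the pay-offs and odds are as above.

```python
import random

random.seed(1)  # reproducible illustration

# The game from above: outcome A pays £50, outcome B pays nothing, odds 50:50.
plays = [random.choice([50, 0]) for _ in range(100_000)]

average = sum(plays) / len(plays)
print(round(average, 2))  # roughly 25 -- the 'expected £25 per play' is a long-run average you can watch happen
```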

Now suppose you multiply the pay-offs by your degrees of belief. The result will be £25, with a degree of belief of 100% (that it’s the correct expected value, not that it’s what you will win per play). But that’s not a pay-off of the game. It corresponds to nothing real. You cannot believe that on one shot of the game you will win £25, since the only outcomes are £0 or £50. So multiplying degrees of belief by pay-offs is not always meaningful: in fact, it is meaningful only when those beliefs are about frequencies, and it’s only accurate when the degrees of belief correspond with the long-run frequencies.
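And here, by contrast, is the single-shot version of the same sketch: one play only ever returns £0 or £50, so the ‘expected’ £25 isn’t something you can ever walk away with.

```python
import random

# One shot of the same game: the only things that can happen are £0 and £50.
one_play = random.choice([50, 0])
print(one_play)       # 0 or 50, never 25

print(25 in {0, 50})  # False -- the 'expected' £25 is not a possible result of a single play
```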

That’s the mistake the behavioural economists make in the first example. They think that we should calculate an expected value for the one shot we have at the £100-or-Nothing offer, see that it is £50, and say that it’s equivalent to the other offer. Except it isn’t. The expected £50 is a fiction. The other £50 is a certainty. Somewhere in the back of our minds, most of us can see the difference. The behavioural economist cannot.

So does the example we started with really prove that I am more risk-averse than a cute French PhD? No. Because one of the choices does not involve any risk at all. The example is testing my preference for a sure thing over a game of chance. Only a degenerate gambler takes a risk over a sure thing. What would measure risk-aversion?

Here’s a different game. You can either spin Wheel One, which will pay off £50 with 50% probability, or Wheel Two, which will pay off £100 with 25% probability. Same expected value of £25. That is a choice between risks.
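Here’s a sketch of that choice, with the pay-offs and probabilities as just stated and an arbitrary number of spins. The long-run averages match; the spread of outcomes does not, and that spread is the risk.

```python
import random
import statistics

random.seed(1)  # reproducible illustration

def spin(payoff, probability, spins=100_000):
    """Repeatedly spin a wheel that pays `payoff` with the given probability, else nothing."""
    return [payoff if random.random() < probability else 0 for _ in range(spins)]

wheel_one = spin(payoff=50, probability=0.50)    # £50 half the time
wheel_two = spin(payoff=100, probability=0.25)   # £100 a quarter of the time

# Same expected value...
print(round(statistics.mean(wheel_one), 2), round(statistics.mean(wheel_two), 2))
# ...different risk: Wheel Two's outcomes are more spread out.
print(round(statistics.stdev(wheel_one), 2), round(statistics.stdev(wheel_two), 2))
```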

Does this matter? Financial regulators have lapped up behavioural economics. They think it’s telling them how we make irrational decisions, or maybe just bad ones, and how they can make regulations to stop the Nasty Banks from cheating us. Start-ups are designing programs which incorporate, in a Regulator-approved manner, these ‘insights’ for banks and advisors to use when selling you investments, and they have big-name clients. So yes, this stuff matters.
