Monday, 28 November 2016

Newcomb's Problem

This appeared in the Guardian recently.
The problem: two closed boxes, A and B, are on a table in front of you. A contains £1,000. B contains either nothing or £1 million. You don’t know which. You have two options: take both boxes, or take box B only. You keep the contents of the box or boxes you take, and your aim is to get the most money.

But here’s the thing. The test was set by a Super-Intelligent Being, who has already made a prediction about what you will do. If Her prediction was that you would take both boxes, She left B empty. If Her prediction was that you would take B only, She put a £1 million cheque in it.

Before making your decision, you do your due diligence, and discover that the Super-Intelligent Being has never made a bad prediction. She predicted Leicester would win the Premier League, the victories of Brexit and Trump, and that Ed Balls would be eliminated from Strictly Come Dancing. She has correctly predicted things you and others have done, including in situations just like this one, never once getting it wrong. It’s a remarkable track record. So, what do you choose? Both boxes or just box B?
This is supposed to puzzle people. And puzzles that don’t seem to have a decent answer usually arise because they aren’t a decent question. Anyway, it originated with a physicist - a descendant of the brother of the famous Newcomb - and was popularised by Robert Nozick, and then by Martin Gardner in Scientific American. See where I’m going with this?

Suppose I say to a bookie: if I think Fancy Girl will win the 2:30, I will bet £100, and if I think Blue Boy will win, I will bet £50. His reply would be: all right, which is it? I can’t place a bet that’s conditional on what I think will happen: the whole point of a bet is to pick one of the outcomes. The closest I can get to making a conditional bet is to put money on each outcome, and if the bookies are doing their job well, I will lose by doing that.
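(A quick sketch of that last point, with decimal odds of 1.5 and 2.4 that I’ve made up for the illustration; any prices whose implied probabilities sum to more than 1 - the overround - behave the same way.)

    # Backing both horses at bookmakers' prices. The odds here are invented;
    # the point is that their implied probabilities sum to more than 1,
    # which is what "doing their job well" means.
    odds = {"Fancy Girl": 1.5, "Blue Boy": 2.4}   # hypothetical decimal odds

    overround = sum(1 / o for o in odds.values())
    print(f"implied probabilities sum to {overround:.3f}")   # about 1.083 > 1

    # Split a £100 stake so that either result returns the same amount.
    stakes = {h: 100 * (1 / o) / overround for h, o in odds.items()}
    for horse, stake in stakes.items():
        print(f"{horse}: stake £{stake:.2f}, return £{stake * odds[horse]:.2f}")
    # Either way the return is 100/overround, about £92.31: a guaranteed loss.
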

What you want to do is this:
If I choose Box B alone, she will have predicted that and put the cheque in it. But if I choose both boxes, she will have predicted that and not put the cheque in. So I should choose Box B.
This assumes what the Special Theory of Relativity tells us cannot happen: that a future event can cause a past one. So let’s try this:
If she predicted that I would choose Box B alone, then she put the cheque there, and I should choose it. If she predicted I would choose both boxes, then she wouldn’t have put the cheque in Box B, so I should choose both boxes, because at least I’ll get £1,000.
The catch is that this doesn’t tell you what to do, since you don’t know what she predicted and so can’t detach the consequents from the conditionals. The next one is silly...
If she predicted that I would choose Box B, then she put the cheque there and I should choose it. If she predicted I would choose both boxes, then she wouldn’t have put the cheque in Box B, so I should not choose both boxes, only Box B.
That sounds good, but in that second case there’s no cheque in Box B, so you get nothing. But what you were going to do was this:
Suppose I choose Box B. Since her predictions are perfect, she predicted that and the cheque is there. But if I choose both boxes, again since her predictions are perfect, the cheque isn’t there. So I choose Box B.
This doesn’t require backwards-causality, but it does require someone to ensure the predictions are perfect. Russian hackers, presumably.(*) What we’re told is that she’s good, not that the game is rigged.(**) Now try this:
If she predicts Box B and I choose Both, I get the cheque and the £1,000. If she predicts Both and I choose B, I get nothing. If she predicts Both and I choose Both, I get £1,000. If she predicts B and I choose B, I get the cheque. So if she predicts B, I get the cheque no matter what I do, and if she predicts Both I lose if I choose B. So I take Both Boxes.
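That case analysis is small enough to tabulate. A minimal sketch in Python, with the Guardian’s amounts, where the assumption doing the work is that her prediction is already fixed whatever you choose:

    # The four prediction/choice combinations, assuming the prediction is
    # fixed before, and independently of, the choice.
    def payoff(prediction, choice):
        box_b = 1_000_000 if prediction == "B" else 0   # the cheque, or nothing
        return box_b if choice == "B" else box_b + 1_000

    for prediction in ("B", "Both"):
        for choice in ("B", "Both"):
            print(f"she predicts {prediction}, I choose {choice}: £{payoff(prediction, choice):,}")
    # she predicts B, I choose B: £1,000,000
    # she predicts B, I choose Both: £1,001,000
    # she predicts Both, I choose B: £0
    # she predicts Both, I choose Both: £1,000
    # Whatever she predicted, taking both boxes is worth £1,000 more:
    # the dominance argument.
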
Those are the actual options assuming free will and imperfect predictions. The only way you get confused is to assume a) that her predictions are causal, or b) that your actions are temporally-backwards causal, or c) that someone is rigging the coincidence between her predictions and your actions.

So how seriously should you take her past performance on predictions? This starts to make it sound like we might want to use Bayesian inference, and indeed the Wikipedia entry for this problem lists David Wolpert and Gregory Benford as having a Bayesian analysis that shows that the different arguments arise from different models of the assumptions, so that there isn’t a real paradox, just an old-fashioned ambiguity.
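(Wolpert and Benford are more careful than this, but the flavour of the expected-value version is easy to sketch. Suppose - my assumption, not the Guardian’s - that all her track record buys you is that she predicts your actual choice correctly with some probability p, and compare the two strategies:)

    # Expected winnings if she predicts your actual choice correctly with
    # probability p (the only assumption in this model).
    def expected_value(choice, p):
        if choice == "B":
            return p * 1_000_000                    # cheque only if she foresaw "B"
        return p * 1_000 + (1 - p) * 1_001_000      # both boxes

    for p in (0.4, 0.5, 0.6, 0.9, 1.0):
        ev_b, ev_both = expected_value("B", p), expected_value("Both", p)
        print(f"p = {p}: EV(B) = £{ev_b:,.0f}, EV(Both) = £{ev_both:,.0f}")
    # The two strategies are equal at p = 1,001,000/2,000,000 = 0.5005; above
    # that, Box B alone has the higher expected value. Which is why how
    # seriously you take the track record is doing all the work.
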

The real reason you choose both boxes in the Guardian’s example is this: it’s the only way you get anything. She’s a woman: the point was to get you to choose Box B, and now that you have, by Briffault’s Second Corollary she doesn’t have to give you the money, so she cancelled the cheque. (***)

(*) Topical political joke.
(**) Another topical political joke.
(***) Robert Briffault
