I got into a tiff the other day over Newcomb's "Paradox". My position is simple: there is no paradox, only a simple payoff table with a logical constraint and an easy choice. Paradox only enters when someone decides not to play by the rules of the thought experiment, which defeats the entire purpose of a thought experiment. More interesting than that, though, our debate seemed to come down to a mathematical argument on my side and a logical one on the other. So the interesting question to me is: why do I place so much more trust in math than in logic, and what is the difference?
Math and Logic both have perfectly precise, rigorous forms. In either field, anything admissible as an argument at all follows necessarily from its premises, and any element used strictly according to the formal structure is rigorously defined. Maybe we could go so far as to say that the form of mathematics is logic; that is, every argument structure and basic element general to the field as a whole is taken from logic. But that point need not be made now.
The significant difference is the content. Mathematical content is also rigorous and well-defined. A line is defined by its relations and functions; anything at all which satisfies those relations is a line, and anything which fails them on the minutest point is not. The number 3 is the number 3; not 2, not 4, and 5 is right out. It's not even 3.00000000000000000000000000000000000000000000000001. So a mathematical argument has perfectly rigorous content to match its perfectly rigorous form. If you can count anything in reality, if quantity applies at all, then the math will be perfectly rigorous so long as you don't decide to change what you are counting.
Logic, on the other hand, has a serious flaw at this point: it supplies only the form of the argument. The argument itself must be populated with outside information, and in the end the argument is only as precise as its content. If I say that my cat is either in the room or not in the room, this is perfectly fine as far as logical form goes. But it neglects that "cat", "room", and "in" are not well-defined. What if my cat is standing in the threshold? Does "in" cover this? What is the essential part of the cat? How far does the room extend? So we either introduce limits which are not part of our standard talk about such matters, or we admit that the logic is imprecise and fails in application. And if we introduce limits, they are either arbitrary or well-studied; too often they seem to be the former.
And this seems to be the case in most situations that we care about: the concepts over which we are arguing are not well-defined. What is a soul, at any rate? A true metaphysics would avoid this problem and resemble mathematics in this respect, but it is debatable whether such a metaphysics exists.
So, in Newcomb's problem, my argument was that one should clearly choose box B. Set up your payoff table: it has four cells, one for each combination of your choice and the Predictor's prediction, but two of those cells are inaccessible by hypothesis, since the Predictor does not err. Therefore one picks both boxes and receives $1,000, or just box B and receives $1,000,000. All other discussion must build off of this basis, or it is no longer talking about the world of the thought experiment (except perhaps to say that a Predictor is incoherent, which is just a debate over the old issues of divine foreknowledge and future contingents).
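For concreteness, here is a minimal sketch in Python of that table and its constraint. The $1,000 and $1,000,000 figures are the ones above; the off-diagonal amounts follow by simple addition, and the constraint is just the hypothesis that the prediction matches the choice:

```python
# Newcomb's payoff table: (your choice, Predictor's prediction) -> payout.
payoffs = {
    ("both", "both"): 1_000,      # B is empty; you get only the $1,000
    ("B",    "B"):    1_000_000,  # B holds the $1,000,000
    ("both", "B"):    1_001_000,  # inaccessible: the Predictor does not err
    ("B",    "both"): 0,          # inaccessible: the Predictor does not err
}

# The hypothesis of the thought experiment: prediction matches choice.
admissible = {choice: payout
              for (choice, prediction), payout in payoffs.items()
              if prediction == choice}

print(admissible)                           # {'both': 1000, 'B': 1000000}
print(max(admissible, key=admissible.get))  # 'B'
```

Once the two inaccessible cells are struck out, the choice is a comparison of $1,000 against $1,000,000, and nothing more.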
The rival argument goes like this: when you enter the room, either there is something in box B or not; either way, you are better off taking both boxes. It then introduces backwards causation and the like to discredit the other side. But forcing the entire choice into a single dichotomy distorts the problem. True, either there is something in B or not; but in both cases there are two stipulated payoffs: one if the Predictor sees you picking both boxes, and another if the Predictor sees you picking only B. A rational decision must always consider the payoffs stipulated; the argument for choosing both boxes implicitly tries to sneak in its own payoff table, which amounts to disagreeing with the problem as set up.
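One way to make the distortion vivid is to weaken the Predictor to mere accuracy p, an assumption of mine for illustration rather than part of the problem as stated. A quick sketch of the expected payoffs shows box B winning for any p above 0.5005; the dominance argument prevails only by holding the prediction fixed while varying the choice, which is precisely to lean on the two cells the problem rules out:

```python
# Expected payoffs under a Predictor who is right with probability p
# (a weakening assumed here for illustration; the problem itself
# stipulates that the prediction matches the choice).
def expected(choice, p):
    if choice == "B":
        return p * 1_000_000 + (1 - p) * 0
    else:  # "both"
        return p * 1_000 + (1 - p) * 1_001_000

for p in (0.5, 0.5005, 0.9, 1.0):
    print(p, expected("B", p), expected("both", p))
# Break-even at p = 0.5005; above that, one-boxing pulls ahead,
# and at p = 1 it is $1,000,000 against $1,000.
```

The dichotomy "either B is full or it isn't" quietly treats p as irrelevant; the stipulated setup makes it everything.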
Claiming that I must be adding in some metaphysical mumbo-jumbo is not a valid response. The situation could be this: assuming determinism, there is a single causal nexus N at time T1 which expresses itself in the Predictor making her prediction at T2, filling the boxes at T3, you walking into the room at T4, and you making your choice and receiving your payment at T5. There is no backward causation, because the same causal nexus determines all the events; T5 is merely expressed later than T4, but both were equally caused and mutually conditioned at T1, along with the prediction itself. This scenario does not seem to me to have any problems with backwards causation and the like. But if it works, then we can go back to saying that the Predictor simply knows, irrespective of how, and the solution is the same.
3 comments:
Given recent advances in neuroscience, which enable a machine to detect your decision to (say) push a button with your left hand (as opposed to your right hand) a fraction of a second before you are aware of having made the decision, the intriguing possibility arises of actually carrying out Newcomb's dilemma in the lab.
That is, we have a clock that counts down to zero. At the moment the clock shows zero, you have t seconds to decide whether to push Button 1 or Button 2, corresponding to the two choices of Newcomb's problem. At time zero, the machine puts money (or not) into the opaque box according to its prediction of your decision. (If you press both buttons, or neither button, by the time t seconds elapse, then you get nothing.) The value of t is chosen to be small enough so that the machine can reliably predict your choice, but large enough so that you have the subjective impression that you are making your decision after the clock reaches zero.
This experiment ought to be feasible with current technology, but I haven't heard of anyone actually performing it.
Hello Timothy!
Thanks for the example - that does sound interesting.
Timothy Chow? I checked your webpage and confirmed, yes, you are my former math camp professor from UMich. Small world. How did you discover Michael's blog?