Let us revisit Kavka's Toxin Puzzle, with the assumption that the billionaire can perfectly predict your intent. The problem is reproduced as follows:

An eccentric billionaire places before you a vial of toxin that, if you drink it, will make you painfully ill for a day, but will not threaten your life or have any lasting effects. The billionaire will pay you one million dollars tomorrow morning if, at midnight tonight, you intend to drink the toxin tomorrow afternoon. He emphasizes that you need not drink the toxin to receive the money; in fact, the money will already be in your bank account hours before the time for drinking it arrives, if you succeed. All you have to do is... intend at midnight tonight to drink the stuff tomorrow afternoon. You are perfectly free to change your mind after receiving the money and not drink the toxin.

Let us reiterate here that the predictor's decision is based on the person's intent, not their action.

Let us look at the notion of intent.

Intent is defined as the resolve or determination to do something. However, in real life, both resolve and determination are tempered by reality. I could intend to drink the toxin tomorrow afternoon, but if I lose the toxin before then, I will not be able to drink it. I did not intend to lose the toxin; reality merely made me unable to drink it.

A predictor based on intent would take the mind-state of the agent and simulate it at the time of action (skipping any intervening time in between), basing its prediction on the actions of that temporally displaced agent.
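A minimal sketch of this mechanism in Python (all names and the world representation here are illustrative assumptions of this sketch, not part of the puzzle): an intent is modeled as a conditional policy over possible worlds, and the intent-predictor evaluates the midnight mind-state directly at the moment of action, skipping the intervening history.

```python
# Toy model of an intent-based predictor (illustrative; the names and the
# world representation are assumptions made for this sketch).

def intent_predictor(mind_state_at_midnight, world_at_action_time):
    # Temporally displace the agent: evaluate the midnight mind-state
    # directly at the time of action, skipping everything in between.
    return mind_state_at_midnight(world_at_action_time)

# An intent is a conditional policy over possible worlds.
def intend_to_drink_if_possible(world):
    # "If I don't lose the toxin, I will drink the toxin."
    return "drink" if world["toxin_exists"] else "cannot drink"

print(intent_predictor(intend_to_drink_if_possible, {"toxin_exists": True}))
# -> drink
```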

Thus, if you lose the toxin, or get hit by a bus, or destroy the toxin by putting it through a thousand independent iterations of a Schrödinger's box, you would still get the payout, provided you still intended to drink the toxin if it survived whatever treatment you put it through.

These intents can be phrased as "if I don't lose the toxin, I will drink the toxin", "if I don't get hit by a bus, I will drink the toxin", and "if the toxin survives a thousand iterated Schrödinger's boxen, I will drink the toxin". But these conditions are all probabilistic: there is still a chance that the toxin survives, and that you still have to drink it.

How about "if I don't receive the one million dollars, I will drink the toxin"?

The predictor, temporally displacing the agent, would evaluate the midnight mind-state, in which the million dollars has not yet been received; it would see the agent drink the toxin, and thus would award the million dollars. This is the solution to our problem.
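Continuing the toy model above (still an illustrative sketch under the same assumptions), the winning intent conditions on the payout itself. The intent-predictor evaluates the midnight mind-state, in which no money has arrived, and therefore sees a drinker; the real agent, evaluated tomorrow afternoon with the money in hand, abstains.

```python
# Continuing the toy intent-based predictor (illustrative assumptions).

def intent_predictor(mind_state_at_midnight, world):
    # Evaluate the midnight mind-state directly at the time of action.
    return mind_state_at_midnight(world)

def conditional_intent(world):
    # "If I don't receive the one million dollars, I will drink the toxin."
    return "abstain" if world["money_received"] else "drink"

# At midnight the money has not yet arrived, so the temporally displaced
# agent lives in a money-less world and would drink: the payout is awarded.
payout = intent_predictor(conditional_intent, {"money_received": False})
print(payout)  # -> drink, so the million dollars is paid

# Tomorrow afternoon the money *has* arrived, so the real agent abstains.
print(conditional_intent({"money_received": True}))  # -> abstain
```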

However, if the predictor looks at the action rather than the intent (and thus causally simulates the time between intent and action), it faces a contradiction: given that the agent receives the million dollars, the agent would not drink the toxin; but given that the agent does not receive the money, the agent would drink it.

The predictor is stuck in an infinite loop.

The problem with predicting the action lies in the backwards flow of information. Given that the predictor is perfect, it is functionally identical to a time machine, and if the action is causally influenced by information obtained from the predictor (the presence of the money), the predicted future no longer holds true. A classic paradox.

(Or, avoiding the time-travel explanation: since the action is causally influenced by the payout, and the payout is causally influenced by the predicted action, which is causally influenced by the predicted payout (ad infinitum), the predictor must contain an infinite number of models of itself nested together, infinitely recurring, making it impossible for it to decide.)
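The regress can be made concrete (again a toy sketch with assumed names): an action-based predictor must know the payout to predict the action, but the payout is its own prediction, so the naive implementation recurses without end.

```python
# Toy model of an action-based predictor (illustrative assumptions).

def agent_action(money_received):
    # "If I don't receive the one million dollars, I will drink the toxin."
    return "abstain" if money_received else "drink"

def action_predictor():
    # To predict the action we must know whether the money was paid; to
    # know that, we must run the prediction... ad infinitum.
    money_received = (action_predictor() == "drink")
    return agent_action(money_received)

try:
    action_predictor()
except RecursionError:
    print("The predictor is stuck in an infinite loop.")
```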

Newcomb's Problem elegantly closes this loophole in two ways.

The first is disallowing random factors in its prediction. If you attempt to flip a perfect coin or use a Schrödinger's box, it will automatically put nothing into the second box. And assuming there are no hidden factors that the predictor cannot predict (as it can simulate the entire universe, and know precisely whether you would be hit by a bus), the predictor can predict your action with certainty.

The second is that there is no backwards flow of information. The predictor in Newcomb's Problem presents you with a choice before its prediction can causally affect you. Yes, the money is either in the second box or not, but the presence of the money has no causal effect on your decision (the box is impenetrable to anything, even gravitational waves. Or I could just sigh and point out that this is an abstract problem). Thus, the predictor is able to accurately predict the future and put the money in the box before you make your choice. (And without all the timey-wimey nonsense, the predictor does not require a model of itself; it only needs to simulate the actions of the agent, since whether or not the predictor puts the money in the second box has no causal interaction with the agent's decision.)
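In the same toy style (names are assumptions of this sketch), the Newcomb predictor needs no model of itself: the agent's choice takes no input from the prediction, so the simulation bottoms out in a single step.

```python
# Toy model of Newcomb's predictor (illustrative assumptions). The agent's
# choice cannot depend on the causally sealed contents of the second box,
# so predicting it requires no self-model and terminates immediately.

def agent_choice():
    # Whatever policy the agent has, it takes no input from the prediction.
    return "one-box"

def newcomb_predictor():
    # Simulate the agent directly; no payout, hence no self-simulation.
    return agent_choice()

# The money is placed (or not) before the choice, with no back-channel.
second_box = 1_000_000 if newcomb_predictor() == "one-box" else 0
print(second_box)  # -> 1000000
```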

Nothing bans information from flowing backwards in time. However, information from the future cannot causally interact with anything that can, in turn, causally influence that information; otherwise a paradox can result. And, as a corollary, any predictor whose prediction causally influences the predicted action is bound to infinite recursion.

Given the nonexistence of perfectly random events, only predictors that respect this constraint can be perfect.
