Wikipedia:Reference desk/Archives/Science/2013 December 14

Science desk
Welcome to the Wikipedia Science Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


December 14

What is the relationship between Double-slit experiment and Newcomb's Paradox

What is the relationship between the double-slit experiment and Newcomb's Paradox? In the double-slit experiment the interference pattern disappears when you find out which slit the electron passed through AFTER THE FACT. In Newcomb's Paradox, you get the smaller amount, $1000, if you choose both boxes AFTER THE FACT. So are these two related to each other? - 220.239.51.150 (talk) 02:06, 14 December 2013 (UTC)[reply]

They are unrelated. Newcomb's paradox is based on a false assumption, that it can be predicted which choice you will make, but we can't tell whether the predictor is 100% accurate, or whether you can change your mind after the prediction is made. Basically, a contradiction is assumed, and given any contradiction, all other contradictions follow. That is, for any A, if A & ~A, then for any B, both B & ~B. Quantum mechanics is weird, but it has nothing to do with contradictions or psychic predictions, and unlike Newcomb's Paradox, it can actually be tested. μηδείς (talk) 02:19, 14 December 2013 (UTC)[reply]
A minor correction - the logical statement should be "If A & ~A, then B, whatever B is." See Ex falso quodlibet. Tevildo (talk) 09:51, 14 December 2013 (UTC)[reply]
I don't see how that's a correction, since (A & ~A) > (B & ~B) is still quite valid. But thanks for providing the link. μηδείς (talk) 16:47, 14 December 2013 (UTC)[reply]
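The logical point being debated here (ex falso quodlibet, the principle of explosion) can be checked mechanically with a truth table. The sketch below is not from the thread; it just verifies that both forms mentioned above, (A & ~A) > B and (A & ~A) > (B & ~B), hold under every truth assignment, because the antecedent A & ~A is never true.

```python
from itertools import product

def implies(p, q):
    """Material implication: p -> q is false only when p is true and q is false."""
    return (not p) or q

# Tevildo's form: (A & ~A) -> B, for any B.
assert all(implies(a and (not a), b)
           for a, b in product([True, False], repeat=2))

# The stronger-looking form from the reply: (A & ~A) -> (B & ~B).
# It is likewise vacuously true, since the antecedent never holds.
assert all(implies(a and (not a), b and (not b))
           for a, b in product([True, False], repeat=2))

print("both forms are tautologies")
```

So both statements are valid; Tevildo's version is simply the more general schema, since B can be any proposition at all.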
Newcomb's Paradox is interesting, but it seems to incorrectly suppose that the person's choice is predicted in isolation from the environment. For example, suppose that they zap me with their brain X-ray pen, then tell me to come back to the warehouse the next day and be ready to pay a $100 entry fee to get my prize. If I am the only one there, I'm going to assume it's some sort of scam, and I won't even part with the money unless I get to open the box at the same time and see the reward money is real; and I'll want box A so I'm not taken for a sucker, in case this really is some kind of crazy reality show. On the other hand, suppose all my friends and neighbors are there in a big line, and everyone picking only box B is getting real millions, while those taking both are going home crying. Then my choice is going to be obvious: follow in with the winners and take only B. Or, for another example, suppose I find out that night that I need $200,000 to ransom a kidnapped relative. Then box A seems pretty useless and I'm pinning all hopes on B. So my "decision" as of the time of the scan isn't some simple A or B choice; it's a set of reactions to a wide range of circumstances. Therefore, the Predictor must know not only what I think, but what will actually happen in every possible way.
Having established that, we now have a situation with retrocausality comparable to the transactional interpretation of the double-slit experiment. Wnt (talk) 15:09, 15 December 2013 (UTC)[reply]
"Having established"? Having established what? You've just listed a bunch of paranormal situations that violate everything we know scientifically about how the world works, and are going to assume they are possible in order to draw a bad parallel with quantum physics? I don't know the name, but there's a fallacy that goes A is weird and B is weird, so A is B. You've just exemplified it perfectly. μηδείς (talk) 21:15, 16 December 2013 (UTC)[reply]
Wnt, maybe you should read the original description of the paradox (the first reference in the article), which sets the situation up in more detail than you imagine. Part of the premise is that many people you trust have already done this and they got $1000 iff they picked both boxes. I somehow doubt that people are that predictable, especially since it would probably occur to some of them to use a hardware random number generator, but anyway that's the setup.
Original poster, I'm not sure what you mean by "after the fact". You have to assume no retrocausality for Newcomb's paradox to have any force. If your choice can causally influence the contents of the second box then the argument for taking both boxes falls apart and there's no paradox. The original paper tries to drive this home by saying you have a friend who has known the contents of the second box for a week (since the prediction) but isn't permitted to tell you. This means, incidentally, that an attempt to tie Newcomb's paradox to quantum mechanics with some kind of collapsing superposition in the second box probably isn't going to fly. There's also no retrocausality in quantum mechanics, inasmuch as you can model any quantum system by a wave function whose evolution doesn't depend on future decisions by the experimenter.
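The tension BenRG alludes to, between the dominance argument for taking both boxes and the predictor's track record, is usually made precise with an expected-value calculation. The sketch below is not from the thread; it assumes the standard payoffs ($1,000 in the transparent box, $1,000,000 in the opaque box iff one-boxing was predicted) and a hypothetical predictor accuracy p.

```python
# Expected payoffs in Newcomb's problem, assuming the predictor is
# correct with probability p (an assumption for illustration only).
def one_box_ev(p):
    # Correct prediction -> opaque box holds $1,000,000.
    return p * 1_000_000

def two_box_ev(p):
    # Correct prediction -> opaque box is empty, you keep only $1,000;
    # incorrect prediction -> you get both $1,000,000 and $1,000.
    return p * 1_000 + (1 - p) * 1_001_000

for p in (0.5, 0.5005, 0.9, 1.0):
    print(f"p={p}: one-box {one_box_ev(p):,.1f}, two-box {two_box_ev(p):,.1f}")
```

Setting the two expectations equal gives a crossover at p = 1001/2000 = 0.5005: above that accuracy, one-boxing has the higher expectation, even though the dominance argument for two-boxing never mentions p at all. That clash is the paradox.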
The Wikipedia article links a couple of papers with "quantum" in their titles. I suspect they're bunk, but I haven't read them. -- BenRG (talk) 06:15, 17 December 2013 (UTC)[reply]
The friends may be a bad example (though it could still matter if you actually see it happen right before you pick) but there are still many external factors (books, movies, conversations, financial windfalls) that could affect the decision after the prediction is made. Wnt (talk) 17:31, 17 December 2013 (UTC)[reply]