Re: [math-fun] best published explanation of the Monty Hall Paradox?
<< In August 2001, I argued stridently on the fledgling Wikipedia that switching was irrelevant, and wrote code to prove it, only to be proven wrong by my own code. I posted the code as an "empirical example" subpage off the main article. Unfortunately, the current article no longer includes an executable demonstration. >>
The key point in the explanation of why switching is better is that the showing of a goat does not affect the probability of 1/3 that the original door is correct. If you agree to that, the other steps are pretty routine and easy to accept. So a good explanation should address this issue.

The reasoning is simple: After the initial guess, the two remaining doors play identical roles with respect to what the player knows. The player knows that at least one of them hides a goat. So when a goat is shown, that gives the player no new information about the original guess. This is true regardless of what algorithm may be used to choose the goat door, as long as the player doesn't know the algorithm. And so the probability of 1/3 cannot change.

--Dan

_____________________________________________________________________
"It don't mean a thing if it ain't got that certain je ne sais quoi." --Peter Schickele
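[The "empirical example" code mentioned above is no longer in the article, but a simulation in the same spirit is easy to reconstruct. The following is a sketch, not the original code; it assumes the standard rules, where the host knows where the car is and always opens an unchosen goat door:]

```python
import random

def play(switch, trials=100_000):
    """Simulate the standard Monty Hall game, where the host knows the
    car's location and always opens an unchosen door hiding a goat."""
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)
        pick = random.randrange(3)
        # Host opens a door that is neither the player's pick nor the car.
        opened = random.choice([d for d in range(3) if d != pick and d != car])
        if switch:
            # Switch to the one remaining unopened door.
            pick = next(d for d in range(3) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials

print("stay:  ", play(switch=False))   # close to 1/3
print("switch:", play(switch=True))    # close to 2/3
```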
On Wed, Oct 28, 2009 at 4:09 PM, Dan Asimov <dasimov@earthlink.net> wrote:
The reasoning is simple: After the initial guess, the two remaining doors play identical roles with respect to what the player knows. The player knows that at least one of them hides a goat. So when a goat is shown, that gives the player no new information about the original guess.
This is true regardless of what algorithm may be used to choose the goat door, as long as the player doesn't know the algorithm.
I agree that if some statement is made that is equivalent to "at least one of the two doors contains a goat", such as the revealing of a goat by someone who knows which door is which and always shows a goat, then there's no new information.

But if someone randomly opens a door, and it contains a goat, then you have information like "door B contains a goat" ... quite a different thing.

The problem is what you can deduce from a statement like "door B contains a goat". If you know the algorithm by which door B was chosen, then I think we all agree that there are many possibilities for how that impacts the probability of door A containing the car (still 1/3, or 1/2, or perhaps you can even get 0 or 1 from some of the situations discussed earlier).

But what should you do if you don't know the algorithm, and don't know what the door opener knows? It seems to me that rather than assuming "the door opener knows what is behind all the doors and always reveals a goat", the more null-hypothesis-like choice would be "the door opener chose a random door, which this time happened to have a goat", in which case you get a probability of 1/2 instead of 1/3.

Maybe this all boils down to what the definition of probability really is. Is it, or can it be, a measure of your ignorance? If it isn't, then how can ignorance of the algorithm mean that the probability can't change from 1/3? (What about a Bayesian approach with various priors for possible door-opening strategies?)

Or maybe it boils down to the question of what constitutes information in this context. I agree that the statement "at least one door contains a goat" carries no new information. But in practice the door opener opens a particular door, and maybe that action communicates some information: "door B contains a goat" is not the same statement.

--Joshua Zucker
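[The random-opener variant described above can be checked empirically by conditioning on the runs where a goat happens to be revealed. This is a sketch written for this archive, not code from the thread:]

```python
import random

def random_host(trials=200_000):
    """Variant where the opener picks one of the two unchosen doors
    uniformly at random. We condition on the trials where the opened
    door happened to hide a goat, and return P(stay wins | goat shown)."""
    stay_wins = goat_shown = 0
    for _ in range(trials):
        car = random.randrange(3)
        pick = random.randrange(3)
        opened = random.choice([d for d in range(3) if d != pick])
        if opened == car:
            continue  # car revealed: this run doesn't match what we observed
        goat_shown += 1
        stay_wins += (pick == car)
    return stay_wins / goat_shown

print(random_host())   # close to 1/2, not 1/3
```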
Here's a particular take using Bayesian analysis.

Let A be the event that the car is behind door 1, and let B be the event that Monty Hall opens a door with a goat. As pointed out, we know that P(A) = 1/3. The question is: what is P(A|B)?

Using Bayes' law:

P(A|B) = P(A) P(B|A)/P(B)

and, to expand,

P(B) = P(B|A) P(A) + P(B|~A) P(~A).

Let a = P(B|A) and b = P(B|~A). All we know a priori is that 0 <= a, b <= 1. But if A holds, then since there is just one car, both unchosen doors hide goats, so we must have a = 1. Substituting gives

P(A|B) = 1/(1 + 2b).

The standard solution of this problem posits that b = 1, thus giving the answer of 1/3. However, another possibility is that Monty chooses a door at random, in which case b = 1/2, yielding the answer of 1/2. So the correct answer depends on what procedure Monty uses (which we don't know).

Victor
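[Victor's closed form P(A|B) = 1/(1 + 2b) can be checked with exact rational arithmetic. A sketch; the function name is ours, not from the thread:]

```python
from fractions import Fraction

def p_a_given_b(a, b):
    """Bayes' law with P(A) = 1/3:
    P(A|B) = a*P(A) / (a*P(A) + b*P(~A))."""
    pa = Fraction(1, 3)
    return (a * pa) / (a * pa + b * (1 - pa))

print(p_a_given_b(Fraction(1), Fraction(1)))      # 1/3 (standard Monty, b = 1)
print(p_a_given_b(Fraction(1), Fraction(1, 2)))   # 1/2 (random Monty, b = 1/2)
```

Both values agree with 1/(1 + 2b): b = 1 gives 1/3 and b = 1/2 gives 1/2.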
_______________________________________________ math-fun mailing list math-fun@mailman.xmission.com http://mailman.xmission.com/cgi-bin/mailman/listinfo/math-fun
participants (3)
- Dan Asimov
- Joshua Zucker
- Victor Miller