Ch.Greathouse: Let's say that we're testing a coin to see if it's unfairly biased towards heads. The null hypothesis is that it's a fair coin. Test 1 consists of flipping the coin twice and seeing if both times it comes up heads; p = 0.25.
Test 2 consists of flipping it once and seeing if it comes up heads; p = 0.5. The chance that a fair coin would fail both tests is 1/8, for a p-value of 0.125. Your formula gives 1/4 * (2*1/2 - 1/4) = 3/16 = 0.1875. Am I missing something?

Charles Greathouse
Analyst/Programmer
Case Western Reserve University

--WDS Response: These are two tests, labeled by you as "test 1" and "test 2." Say they decided X<0.25 and Y<0.5. However, what I meant by "interchangeable" (and I'm sorry I was not clearer about that) was this: somebody tells you they did 2 tests, and says one of them failed like "X<0.25" and the other failed with "Y<0.5" BUT DOES NOT SAY which was X and which was Y. You then know that at least one of these two events happened:

I. "X<0.25 and Y<0.5"
II. "X<0.5 and Y<0.25"

but you do not know which. (Or both could have occurred, since they overlap -- but you do not know whether that happened either.) Given this knowledge (and non-knowledge) by you, the p-level is the probability of the union, which by inclusion-exclusion is 1/8 + 1/8 - 1/16 = 3/16.

If you look at the exact problem-statement wording in my preliminary paper, http://rangevoting.org/CombinedTestFail.html , I hope you will see this is the version I am speaking of. This also happens to be the problem version that is highly relevant to a lot of real-life stat-testing applications, i.e. I did not pick it merely because I was evil. Is that ok?
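[Editor's note: a quick Monte Carlo sketch (an illustration added here, not from the paper) can check both numbers. Under the null hypothesis each test's p-value X, Y is independently uniform on [0, 1], so we can just sample and count how often the labeled event and the interchangeable (unordered) event occur:]

```python
import random

# Under the null, the p-values X and Y of two independent tests are
# each uniform on [0, 1]. Compare the labeled event (you know which
# test gave which p-value) with the interchangeable event (you only
# know the unordered pair of p-values).
random.seed(1)
trials = 1_000_000
labeled = 0          # X < 0.25 and Y < 0.5, with labels known
interchangeable = 0  # event I or event II, labels unknown

for _ in range(trials):
    x = random.random()
    y = random.random()
    if x < 0.25 and y < 0.5:
        labeled += 1
    if (x < 0.25 and y < 0.5) or (x < 0.5 and y < 0.25):
        interchangeable += 1

print(labeled / trials)          # close to 1/8  = 0.125
print(interchangeable / trials)  # close to 3/16 = 0.1875
```

The 3/16 emerges because the two overlapping events I and II each have probability 1/8 and their intersection (X<0.25 and Y<0.25) has probability 1/16, matching the inclusion-exclusion calculation above.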