The posterior is P(x) = x^h (1-x)^t (taking a uniform prior and dropping the normalization). Taking the log and differentiating, we see that it peaks at x = p = h/n, where n = h+t. Expanding about x = p, log P(x) = constant - (1/2) (n/pq) (x-p)^2, where q = t/n = 1-p. For h,t >> 1, P(x) is approximately Gaussian with mean p, variance pq/n, and standard deviation s = sqrt(pq/n) = 1/(2 sqrt(n)) if we're fussing about a nearly fair coin (p ~ q ~ 1/2).

For a fairness decision we need two parameters k and e (and opinion enters via their choice). We can say that the coin is (k,e)-fair if the center band of the Gaussian, the interval [p-ks, p+ks], is contained within the interval [1/2-e, 1/2+e]; (k,e)-unfair if these intervals are disjoint; and otherwise undecided. The larger k or the smaller e, the more stringent the test, and the longer it must run.

Remark: If a coin of known heads probability p is tossed n times, the mean number of heads is np and the standard deviation is sqrt(npq). Did I make a mistake when I said s = sqrt(pq/n)? Well, it's not quite the same situation. The direct probability is a distribution over the head count (and carries a binomial coefficient), while the Bayesian posterior is a density over x. Converting counts to fractions divides the standard deviation by n, and sqrt(npq)/n = sqrt(pq/n), so the two widths agree.

The law of the iterated logarithm states that the deviation from the mean, measured in standard deviations, has limsup sqrt(2 log log n): almost surely, excursions of that many sigmas recur infinitely often. If we have the bad luck to take the posterior at one of these peak deviations, our test could give a false result. Once a large deviation occurs, there is no "restoring force" to undo it; it sticks.

Does the LIL undermine the utility of Bayes' theorem? Bayes' theorem uses only the final posterior. Is there extra information to be gleaned from earlier posteriors? I need to think about it. Bayes doesn't guarantee certainty, just a known confidence level, so perhaps the LIL is one of the ways it can occasionally fail.

As a practical matter, the LIL may not even intrude into any feasible certification test. A popular criterion is 3-sigma. Setting sqrt(2 log log n) = 3 (natural logs throughout) gives log log n = 9/2, log n = exp(4.5) ~ 90, and n ~ exp(90) ~ 1.2e39.

-- Gene
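
P.S. For concreteness, here is the decision rule as a short Python sketch. The function name, the verdict labels, and the example numbers are mine, not anything established above; it just implements the interval comparison with s = sqrt(pq/n).

import math

def fairness_verdict(h, t, k, e):
    """(k,e)-fairness verdict from the Gaussian approximation to the
    posterior P(x) ~ x^h (1-x)^t; assumes h, t >> 1."""
    n = h + t
    p = h / n                       # posterior peak
    s = math.sqrt(p * (1 - p) / n)  # posterior standard deviation
    lo, hi = p - k * s, p + k * s   # center band [p - ks, p + ks]
    if 0.5 - e <= lo and hi <= 0.5 + e:
        return "fair"       # band contained in [1/2 - e, 1/2 + e]
    if hi < 0.5 - e or lo > 0.5 + e:
        return "unfair"     # band disjoint from [1/2 - e, 1/2 + e]
    return "undecided"

# 5100 heads in 10000 tosses, k = 3, e = 0.02:
# p = 0.51, s ~ 0.005, band ~ [0.495, 0.525], which overlaps but is
# not contained in [0.48, 0.52], so:
print(fairness_verdict(5100, 4900, 3, 0.02))   # undecided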
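
P.P.S. And a toy sequential run of the same test on a genuinely fair coin, to illustrate the practical point: at any feasible n, sqrt(2 log log n) is still small (about 2.2 at n = 1e5), so at k = 3 a false "unfair" verdict would take a deviation far beyond what the LIL leads us to expect this early. The seed and the parameters k = 3, e = 0.02 are my arbitrary choices.

import math, random

random.seed(1)           # arbitrary seed
k, e = 3.0, 0.02         # test parameters (my choices)
h = 0
for i in range(1, 100001):
    h += random.random() < 0.5   # toss a genuinely fair coin
    if i % 20000 == 0:           # re-run the test at checkpoints
        p = h / i
        s = math.sqrt(p * (1 - p) / i)
        lo, hi = p - k * s, p + k * s
        verdict = ("fair" if 0.5 - e <= lo and hi <= 0.5 + e else
                   "unfair" if hi < 0.5 - e or lo > 0.5 + e else
                   "undecided")
        print(i, round(p, 4), verdict)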