Dan writes: "(To me, this makes sense only if [the friend tosses the coin n times and you toss it n' times] in each of infinitely many trials; correctly simulating a probabilistic law in only finitely many tosses doesn't seem well-defined to me.)"
This is the standard frequentist view: probability only makes sense as asymptotic frequency.
"But I'm not seeing right away how that solves the problem of reverse-engineering Galton boards."
The "dithering" scheme I described in my last post on the coin-tossing problem gives an iterative method for choosing target probability distributions at the second-to-last, third-to-last, ..., second, and first levels of the Galton board; from those distributions, one can compute the required bias at each junction.
Yes, that's horribly cryptic, and I apologize; I wouldn't even understand the above if I hadn't been the one to write it. I'd try to write something clearer, but I have to give my kids breakfast now.
Jim
On 6/23/12, Dan Asimov <dasimov@earthlink.net> wrote:
Re the 11:55 PM post (at bottom), all I can think of is that if all transition probabilities are the same 2-vector: (horizontal, vertical) = (p,q) = (p,1-p), then the coefficients of the polynomial Q(t) := (p+qt)^n are the striking probabilities along the diagonal i+j = n.
(More generally, if the transition probabilities for each point on the diagonal D_k = {(i,j) | i+j = k} are all the same 2-vector (p_k, q_k) for 0 <= k < n, then the striking probabilities on D_n are the coefficients of Q(t) := Prod_{0 <= k < n} (p_k + q_k*t).)
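A quick numerical check of Dan's product formula: push a unit of probability down a board whose junctions on the diagonal D_k all share the same (p_k, q_k), and compare the striking probabilities on D_n with the coefficients of Q(t) = Prod_{0 <= k < n} (p_k + q_k*t). The sketch below is only illustrative; the values of n and of the p_k are arbitrary.

import numpy as np

n = 5
rng = np.random.default_rng(0)
p = rng.uniform(0.1, 0.9, size=n)      # arbitrary p_k for diagonals D_0, ..., D_{n-1}
q = 1.0 - p

# Forward propagation: dist[j] = probability of sitting at the j-th point of D_k.
dist = np.array([1.0])                 # all mass at the single point of D_0
for k in range(n):
    nxt = np.zeros(k + 2)
    nxt[:-1] += dist * p[k]            # "horizontal" move keeps the index j
    nxt[1:]  += dist * q[k]            # "vertical" move sends j to j + 1
    dist = nxt

# Coefficients of Q(t) = Prod_k (p_k + q_k*t), built by repeated convolution.
coeffs = np.array([1.0])
for k in range(n):
    coeffs = np.convolve(coeffs, [p[k], q[k]])

print(np.allclose(dist, coeffs))       # True: striking probabilities = coefficients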
Re the 1:11 AM post (immediately below): If, each time a biased coin with Prob(H) = p is tossed n times, you are told the number #(H;n) of heads, then a statistically indistinguishable simulation of n' <= n tosses of the coin would be to toss, n' times, a biased coin having Prob(H) = #(H;n)/n.
(To me, this makes sense only if [the friend tosses the coin n times and you toss it n' times] in each of infinitely many trials; correctly simulating a probabilistic law in only finitely many tosses doesn't seem well-defined to me.)
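In script form, Dan's proposal reads roughly as follows; the parameter values are arbitrary illustrations, and nothing here speaks to how close the match is at finite n.

import numpy as np

rng = np.random.default_rng(1)
p, n, n_prime = 0.3, 100, 40                   # illustrative values; p is "unknown"

x = rng.binomial(n, p)                         # the friend reports #(H; n)
plug_in_bias = x / n                           # estimate Prob(H) by the observed frequency
x_prime = rng.binomial(n_prime, plug_in_bias)  # Dan's proposal: n' tosses at the plug-in bias
print(x, x_prime)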
But I'm not seeing right away how that solves the problem of reverse-engineering Galton boards.
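Here is one concrete reading of how the pieces might fit together, offered as a sketch rather than as Jim's actual scheme. Take a target distribution on the bottom row; back it up one row at a time by the map "delete one of the left/right decisions uniformly at random" (that backward step is a guess at the connection to the coin puzzle, not necessarily the "dithering" scheme Jim refers to); then read the required bias at each junction off each pair of consecutive row distributions. The function names and the sample target below are arbitrary.

import numpy as np

def reverse_engineer(target):
    # target[j] = desired probability of landing at position j on the bottom
    # row of an n-row board (len(target) = n + 1).  Returns bias, a list with
    # bias[k][j] = probability that a ball at position j in row k goes right.
    # Backward step (an assumption): row k's distribution is what row k+1's
    # becomes after deleting one of the k+1 left/right decisions at random.
    n = len(target) - 1
    rows = [np.asarray(target, dtype=float)]
    for k in range(n - 1, -1, -1):
        nxt = rows[0]                          # distribution on row k + 1
        j = np.arange(k + 1)
        cur = nxt[:-1] * (k + 1 - j) / (k + 1) + nxt[1:] * (j + 1) / (k + 1)
        rows.insert(0, cur)                    # distribution on row k
    bias = []
    for k in range(n):
        cur, nxt = rows[k], rows[k + 1]
        j = np.arange(k + 1)
        denom = np.where(cur > 0, cur, 1.0)    # avoid dividing by zero
        b = np.where(cur > 0, nxt[1:] * (j + 1) / ((k + 1) * denom), 0.5)
        bias.append(b)                         # zero-mass positions get bias 1/2
    return bias

def forward(bias, n):
    # Propagate a single ball's position distribution down the board.
    dist = np.array([1.0])
    for k in range(n):
        nxt = np.zeros(k + 2)
        nxt[:-1] += dist * (1 - bias[k])       # go left: keep the position index
        nxt[1:]  += dist * bias[k]             # go right: position index + 1
        dist = nxt
    return dist

target = np.array([0.05, 0.10, 0.40, 0.25, 0.20])            # arbitrary target
bias = reverse_engineer(target)
print(np.allclose(forward(bias, len(target) - 1), target))   # True

One pleasant feature of this particular backward step is that the solved biases automatically land in [0, 1], so the board is always realizable.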
On 2012-06-23, at 1:11 AM, Jim <jamespropp@gmail.com> wrote:
Let me phrase the puzzle in a more (math-)fun way. A friend of yours has a biased coin, but she won't tell you what the bias p is ("Hey, I'm not THAT good a friend!"). She tosses the coin n times and tells you how many times it came up heads (call it X), and then she gives you a positive integer n'<n. Your job is to use n, n', and X, along with some supplemental randomness, to generate a number X' that is statistically indistinguishable from the (random) number of heads you would get if you tossed your friend's coin n' times.
Note that you do not have access to your friend's coin; that is to say, you don't know its bias. Nonetheless, you can effectively simulate n' tosses of your friend's coin if she does you the favor of tossing it n times, where n'<n, and telling you how many heads she observed.
Note that X' need not be independent of X. All we require is that X' is governed by the binomial distribution Binomial(p,n') provided that X is governed by the binomial distribution Binomial(p,n).
I stress that the point of the puzzle is to find a way to do this that doesn't require knowledge of p.
By the way, what's the quickest way to see that you CAN'T accomplish the above if n' is bigger than n?
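One scheme that meets the requirement, though not necessarily the one Jim has in mind: imagine the friend's n tosses laid out in a row with X heads among them, pick n' of the n positions uniformly at random, and report the number of heads among those; that is, draw X' from the hypergeometric distribution determined by X. An empirical sanity check, with arbitrary illustrative parameters:

import numpy as np

rng = np.random.default_rng(0)
p, n, n_prime, trials = 0.37, 60, 25, 200_000   # arbitrary; the scheme never looks at p

X = rng.binomial(n, p, size=trials)             # the friend's reports

# Supplemental randomness only: inspect n' of the n tosses chosen at random
# and count the heads among them (a hypergeometric draw given X).
X_prime = rng.hypergeometric(X, n - X, n_prime)

# Reference: what n' genuine tosses of the friend's coin look like.
reference = rng.binomial(n_prime, p, size=trials)

print(X_prime.mean(), reference.mean())         # both close to n' * p
print(X_prime.var(), reference.var())           # both close to n' * p * (1 - p)

Matching the mean and variance is not a proof, of course; the underlying identity is that the n' inspected tosses are themselves independent tosses of the friend's coin, so their head count is Binomial(p,n') exactly.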
On Fri, Jun 22, 2012 at 11:55 PM, Jim <jamespropp@gmail.com> wrote:
I found the answer I was looking for. It doesn't involve any optimization; it just involves algebra (mostly linear algebra), and it's the Right Thing to Do.
I'll pose the key idea in the form of a puzzle: Find the linear map that sends probability distributions on {0,1,...,n} to probability distributions on {0,1,...,n-1} with the property that for every p, the map sends the distribution Binomial(p,n) to the distribution Binomial(p,n-1). (Here we treat probability distributions as vectors in the obvious way.)
Did any of you already know that such a "one size fits all" linear map exists? A priori you might think that you'd need to know p.
Jim Propp
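One way to convince yourself numerically that a single p-independent map exists, without first writing down a formula for it: the vectors Binomial(p,n), taken at any n+1 distinct values of p in [0,1), are linearly independent, so they pin the map down; solve for the matrix from those values and then test it at values of p it was never fitted to. The choices of n and of the fitting points below are arbitrary.

import numpy as np
from math import comb

def binom_vec(p, m):
    # the distribution Binomial(p, m) as a vector of length m + 1
    return np.array([comb(m, k) * p**k * (1 - p)**(m - k) for k in range(m + 1)])

n = 7
ps = np.linspace(0.1, 0.9, n + 1)                       # n + 1 distinct fitting values of p
B = np.column_stack([binom_vec(p, n) for p in ps])      # (n+1) x (n+1), invertible
C = np.column_stack([binom_vec(p, n - 1) for p in ps])  # n x (n+1)
M = np.linalg.solve(B.T, C.T).T                         # candidate "one size fits all" matrix

for p in (0.03, 0.5, 0.77, 0.999):                      # values never used in the fit
    assert np.allclose(M @ binom_vec(p, n), binom_vec(p, n - 1))
print("one matrix works for every p tested")

Nothing here is special to n = 7; the same solve goes through for any n.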
On 6/12/12, James Propp <jamespropp@gmail.com> wrote:
Dan Asimov rightly points out that I should say what I mean by a Galton board, since some of you may not have heard of it.
We feed a marble into a router at the top of the device (call it A1), which randomly routes the marble either down and to the left (to a router we'll call B1) or down and to the right (to a router we'll call B2). B1 routes marbles randomly to either C1 or C2, while B2 routes marbles randomly to either C2 or C3. And so on. The nth router in a given row routes marbles either to the left (to the nth router in the next row) or to the right (to the (n+1)st router in the next row).
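For concreteness, here is a minimal simulation of the device just described; the number of rows, the common right-turn probability, and the marble count are arbitrary, and with the probability at 1/2 the bin counts come out roughly bell-shaped.

import numpy as np

def drop_marble(num_rows, bias=0.5, rng=None):
    # One marble: at each router it is sent down and to the right with
    # probability bias, otherwise down and to the left.  Returns the
    # index of the bin it lands in (0 = far left, num_rows = far right).
    if rng is None:
        rng = np.random.default_rng()
    position = 0
    for _ in range(num_rows):
        if rng.random() < bias:
            position += 1
    return position

rng = np.random.default_rng(0)
counts = np.bincount([drop_marble(10, rng=rng) for _ in range(100_000)],
                     minlength=11)
print(counts)   # roughly bell-shaped, as at the science museum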
If this is still unclear, let me know, and I'll try to include an ASCII art sketch or find an image and explanation on the web. Note that another word for a Galton board is a quincunx. A more suggestive and apt name would be "That thingy that they have at science museums that's supposed to demonstrate Gaussian distributions (but sometimes doesn't because balls' momentum can lead to fatter tails)".
Jim Propp
_______________________________________________ math-fun mailing list math-fun@mailman.xmission.com http://mailman.xmission.com/cgi-bin/mailman/listinfo/math-fun