Here's a worked example of reverse-engineering a Galton board. Suppose the target distribution is (.1, .2, .3, .4); that is, we want this to be the distribution of "mass" at the bottom of the Galton board. Here's a picture that may be helpful, best viewed in a fixed-width font (is there a good way to embed fixed-font text in postings sent via gmail?):

             A
            / \
           /   \
          /     \
         B       C
        / \     / \
       /   \   /   \
      /     \ /     \
     D       E       F
    / \     / \     / \
   /   \   /   \   /   \
  /     \ /     \ /     \
 G       H       I       J

We have mass 1/10 at G, mass 2/10 at H, mass 3/10 at I, and mass 4/10 at J. We let the mass flow upward through the diagram in such a way that mass is conserved at each vertex (so that mass .1+.2+.3+.4 = 1 arrives at A), with the property that the way the mass splits as it goes up is governed by specific ratios. If we let DG denote the amount of mass flowing upward along edge DG, then the ratios are as follows:

DH:EH = 1:2
EI:FI = 2:1
BE:CE = 1:1

The general pattern might be clearer if we introduce extra vertices denoted by "*" along the borders of the diagram and write

*G:DG = 0:3, DH:EH = 1:2, EI:FI = 2:1, FJ:*J = 3:0
*D:BD = 0:2, BE:CE = 1:1, CF:*F = 2:0
*B:AB = 0:1, AC:*C = 1:0

(so the kth vertex of a row with n+1 vertices splits its mass k : n-k between its up-left and up-right edges, with the "*" edges carrying no mass). Anyway, we find the following upward flows along the edges:

DG = 3/30, DH = 2/30, EH = 4/30, EI = 6/30, FI = 3/30, FJ = 12/30
BD = 1/6, BE = 1/6, CE = 1/6, CF = 3/6
AB = 1/3, AC = 2/3

Now reverse the flow, so that all the mass flows down, and choose the bias at each junction to be whatever it needs to be to give these same flows. In our case, the biases at A, B, C, D, E, and F turn out to be 1:2, 2:2, 1:3, 3:2, 2:3, and 1:4.

I'm sure that's unclear, but hopefully it's unclear in a different way than my earlier postings. As I mentioned, I will be writing this up at some point...

Jim

On Fri, Jun 22, 2012 at 11:55 PM, James Propp <jamespropp@gmail.com> wrote:
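The upward-flow construction above can be checked mechanically. Here is a sketch in Python (function names are mine, not from the original write-up) that pushes the target mass upward using the k : n-k splitting ratios and then reads off the down-left : down-right flow at each router; exact rational arithmetic reproduces the biases quoted above:

```python
from fractions import Fraction

def galton_biases(target):
    """Reverse-engineer a Galton board for a target bottom-row distribution.

    Push the mass upward: vertex k of a row with n+1 vertices splits its
    mass k : (n-k) between its up-left and up-right edges (the "*" edges
    carry nothing).  Reversing the flow, router j of the row above then
    splits mass down-left : down-right in the ratio of the two upward
    flows entering it from below.  Returns a list of rows, top row first;
    each row is a list of (down-left flow, down-right flow) pairs.
    """
    row = [Fraction(x) for x in target]
    levels = []
    while len(row) > 1:
        n = len(row) - 1
        splits, parents = [], []
        for j in range(n):  # router j in the row above
            left = row[j] * Fraction(n - j, n)       # up-right flow of child j
            right = row[j + 1] * Fraction(j + 1, n)  # up-left flow of child j+1
            splits.append((left, right))
            parents.append(left + right)
        levels.append(splits)
        row = parents
    levels.reverse()
    return levels

def push_down(levels):
    """Sanity check: run mass 1 down through the computed biases."""
    row = [Fraction(1)]
    for splits in levels:
        nxt = [Fraction(0)] * (len(row) + 1)
        for j, (l, r) in enumerate(splits):
            nxt[j] += row[j] * l / (l + r)
            nxt[j + 1] += row[j] * r / (l + r)
        row = nxt
    return row

target = [Fraction(k, 10) for k in (1, 2, 3, 4)]
for splits in galton_biases(target):
    print(["%s:%s" % (l, r) for l, r in splits])
# ['1/3:2/3']                               A: bias 1:2
# ['1/6:1/6', '1/6:1/2']                    B: 1:1,  C: 1:3
# ['1/10:1/15', '2/15:1/5', '1/10:2/5']     D: 3:2,  E: 2:3,  F: 1:4
```

Running the mass back down through those biases (`push_down`) recovers (.1, .2, .3, .4) exactly.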
I found the answer I was looking for. It doesn't involve any optimization; it just involves algebra (mostly linear algebra), and it's the Right Thing to Do.
I'll pose the key idea in the form of a puzzle: Find the linear map that sends probability distributions on {0,1,...,n} to probability distributions on {0,1,...,n-1} with the property that for every p, the map sends the distribution Binomial(p,n) to the distribution Binomial(p,n-1). (Here we treat probability distributions as vectors in the obvious way.)
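For anyone who wants to verify a candidate answer numerically, here is a short sketch in Python (function names are mine; and fair warning, the docstring of the second function gives away one solution to the puzzle):

```python
from fractions import Fraction
from math import comb

def binomial(p, n):
    """The Binomial(p, n) distribution as a vector on {0, 1, ..., n}."""
    return [comb(n, k) * p**k - 0 if False else comb(n, k) * p**k * (1 - p)**(n - k)
            for k in range(n + 1)]

def drop_one_trial(q):
    """Spoiler: delete a uniformly random one of the n trials,
        r_j = ((n-j)/n) q_j + ((j+1)/n) q_{j+1}.
    This map is linear in q, mentions p nowhere, and sends
    Binomial(p, n) to Binomial(p, n-1) for every p simultaneously."""
    n = len(q) - 1
    return [Fraction(n - j, n) * q[j] + Fraction(j + 1, n) * q[j + 1]
            for j in range(n)]

p = Fraction(2, 7)
assert drop_one_trial(binomial(p, 5)) == binomial(p, 4)  # exact, for any p
```

Using `Fraction` for p makes the check exact rather than a floating-point approximation.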
Did any of you already know that such a "one size fits all" linear map exists? A priori you might think that you'd need to know p.
Jim Propp
On 6/12/12, James Propp <jamespropp@gmail.com> wrote:
Dan Asimov rightly points out that I should say what I mean by a Galton board, since some of you may not have heard of it.
We feed a marble into a router at the top of the device (call the router A1) that randomly routes the marble either down and left (to a router we'll call B1) or down and right (to a router we'll call B2). B1 routes marbles randomly to either C1 or C2, while B2 routes marbles randomly to either C2 or C3. And so on. The nth router in some row routes marbles either to the left (to the nth router in the next row) or to the right (to the (n+1)st router in the next row).
If this is still unclear, let me know, and I'll try to include an ASCII art sketch or find an image and explanation on the web. Note that another word for a Galton board is a quincunx. A more suggestive and apt name would be "That thingy that they have at science museums that's supposed to demonstrate Gaussian distributions (but sometimes doesn't because balls' momentum can lead to fatter tails)".
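To make the router description concrete, here is a minimal simulation in Python (names and parameters are mine) of an unbiased board. A marble's final bin is just its number of rightward moves, so the bin counts approximate Binomial(1/2, rows), hence the roughly Gaussian histogram; unlike the museum version, simulated marbles carry no momentum, so the tails behave:

```python
import random
from collections import Counter

def galton(rows, trials, p=0.5, seed=0):
    """Drop `trials` marbles through `rows` rows of routers, each of
    which independently routes a marble right with probability p.
    Returns the list of final-bin counts for bins 0..rows."""
    rng = random.Random(seed)
    bins = Counter(sum(rng.random() < p for _ in range(rows))
                   for _ in range(trials))
    return [bins[k] for k in range(rows + 1)]

counts = galton(rows=8, trials=10000)
print(counts)  # bell-shaped, peaked near the middle bin
```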
Jim Propp