[math-fun] reverse-engineering biased Galton boards
Suppose we want to create a biased (n-1)-level Galton board so that the distribution of the balls in the n bins at the bottom is governed by some specific probability vector (p_1, p_2, ..., p_n). This is a massively underdetermined problem; there are lots of ways to choose biases at the n(n-1)/2 junctions so as to generate this distribution at the bottom. But suppose we have the additional goal of minimizing the effect that slight perturbations of the biases will have. Is there a unique best choice of biases, and if so, what is it? Here we're dealing with a map from R^{n(n-1)/2} to R^n, so it's not clear to me what the right way to measure sensitivity to perturbations is. But I suspect that this problem has a nice canonical answer if one chooses some natural way to measure sensitivity.

Example: Take n=3, with (p_1,p_2,p_3) = (1/3,1/3,1/3). Let the bias at the top of the Galton board be q:1-q, and let the biases at the second level be r:1-r and s:1-s. We want

(*)   qr = 1/3,   q(1-r) + (1-q)s = 1/3,   (1-q)(1-s) = 1/3.

This has a one-parameter family of solutions r = 1/(3q) and s = 1 - 1/(3-3q) (but we won't actually use this fact explicitly in the analysis appearing below).

Suppose we have such a solution q, r, s. Now consider the perturbed solution q', r', s' with q' = q+x, r' = r+y, s' = s+z. The difference between the perturbed vector (q'r', q'(1-r')+(1-q')s', (1-q')(1-s')) and the vector (1/3,1/3,1/3) is, up to first order,

v(x,y,z) := (rx + qy, (1-r-s)x - qy + (1-q)z, (s-1)x + (q-1)z).

If we now replace x, y, z by independent Gaussian random variables X, Y, Z with mean 0 and variance 1, we find that the expected squared magnitude of v(X,Y,Z) is r^2 + q^2 + (1-r-s)^2 + q^2 + (1-q)^2 + (s-1)^2 + (q-1)^2 (the cross terms vanish because X, Y, Z are independent with mean 0), which equals

(**)  4 - 4q - 2r - 4s + 4q^2 + 2r^2 + 2s^2 + 2rs.

If instead of variance 1 we use variance epsilon, that just multiplies the expected squared magnitude of v(X,Y,Z) by a factor of epsilon. So it's natural to try to minimize the expression (**) subject to the constraints (*).

Use Lagrange multipliers L, M, N: setting the partial derivatives of the Lagrangian

4 - 4q - 2r - 4s + 4q^2 + 2r^2 + 2s^2 + 2rs - L(qr - 1/3) - M(q(1-r) + (1-q)s - 1/3) - N((1-q)(1-s) - 1/3)

with respect to q, r, s, L, M, and N equal to 0, we obtain six equations, three of which are (*) and three of which are

L(r) + M(1-r-s) + N(s-1) = -4 + 8q
L(q) - M(q) = -2 + 4r + 2s
M(1-q) - N(1-q) = -4 + 2r + 4s

When Maple solves this system, it gives only one solution with q, r, and s positive, namely (q,r,s) = (1/2, 2/3, 1/3).

I'm betting that more generally there's some unique way to minimize sensitivity to error, and that the biases that minimize sensitivity to error are rational functions of p_1, ..., p_n. Any ideas, anyone? I'm open to other candidate definitions of what it means to minimize sensitivity; I just picked the one that seemed most natural to me.

Jim Propp
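
P.S. In case anyone wants to reproduce the computation without Maple, here is a rough sketch of the same Lagrange-multiplier calculation in Python/SymPy (notation q, r, s, L, M, N as above; it just solves the six equations and keeps the solution with q, r, s strictly between 0 and 1):

    # Sketch: redo the Lagrange-multiplier computation above in SymPy
    # (the original used Maple; this is just an independent check).
    import sympy as sp

    q, r, s, L, M, N = sp.symbols('q r s L M N', real=True)

    # Objective (**): expected squared magnitude of the first-order error vector.
    f = 4 - 4*q - 2*r - 4*s + 4*q**2 + 2*r**2 + 2*s**2 + 2*r*s

    # Constraints (*), written as expressions equal to zero.
    g1 = q*r - sp.Rational(1, 3)
    g2 = q*(1 - r) + (1 - q)*s - sp.Rational(1, 3)
    g3 = (1 - q)*(1 - s) - sp.Rational(1, 3)

    # Lagrangian; its six partial derivatives give the system in the text.
    lagrangian = f - L*g1 - M*g2 - N*g3
    eqs = [sp.diff(lagrangian, v) for v in (q, r, s, L, M, N)]

    for sol in sp.solve(eqs, [q, r, s, L, M, N], dict=True):
        vals = [sol[v] for v in (q, r, s)]
        if all(v.is_real and 0 < float(v) < 1 for v in vals):
            print(vals)   # expect q = 1/2, r = 2/3, s = 1/3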
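
And an even lower-tech sanity check, this time using the one-parameter family r = 1/(3q), s = 1 - 1/(3-3q) explicitly (q has to stay between 1/3 and 2/3 for r and s to be genuine probabilities): scan q numerically and see where (**) is smallest.

    # Along the solution family r = 1/(3q), s = 1 - 1/(3-3q),
    # evaluate the sensitivity (**) and locate its minimum; it should sit at q = 1/2.
    def sensitivity(q):
        r = 1 / (3 * q)
        s = 1 - 1 / (3 - 3 * q)
        return 4 - 4*q - 2*r - 4*s + 4*q**2 + 2*r**2 + 2*s**2 + 2*r*s

    # q must stay in (1/3, 2/3) to keep r and s in [0, 1].
    qs = [1/3 + k * (1/3) / 1000 for k in range(1, 1000)]
    best = min(qs, key=sensitivity)
    print(best, sensitivity(best))   # expect roughly 0.5 and 17/9 = 1.888...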