[math-fun] simultaneous approximation
On Dec 8, 2002, Gene Salamin asked:

  If x is irrational, and I wish to best approximate the ratio x:1 by
  integers a:b, then I would use continued fractions. Suppose x and y are
  irrational, and I wish to best approximate the ratio x:y:1 by integers
  a:b:c. Is it known how to do this?

Dan Asimov suggested some possible definitions of "best" based on
projective plane ideas. Mike Stay suggested:

  Depends what you mean by "best". For two irrationals, you might want to
  look into cubic continued fractions. Here are two papers on bifurcating
  continued fractions that give good approximations; the determinant of
  three consecutive approximations is 1 (whereas normal CFs give +/-1),
  so in that sense they're optimal. (arxiv.org/pdf/math.GM/0002227 and
  /0008060)

David Bailey remarked:

  This is just the integer relation problem for n = 3. The integer
  relation problem for n = 2 is solvable by the standard Euclidean
  algorithm. For n = 3 one can use PSLQ, or a suitably modified
  application of LLL.

Gene continued with:

  If irrational x is approximated by rational p/r, and irrational y is
  approximated by rational q/r, then a hand-waving argument using
  information conservation shows that there should be lots of
  approximations good to order r^(-3/2). If we did not require the same
  r in both denominators, then it is known from continued fraction
  theory that we could do better, namely r^(-2). Suppose x and y are
  given with n bits each, for a total of 2n bits. Then integers p, q, r
  will have (2/3)n bits each. If the fractions are to be good to n bits,
  then the error is of order r^(-3/2). Is this conjecture known to be
  true (or false)?

which seems to be the last message in the discussion.

I have a couple of comments. I don't understand DHB's remarks about the
Integer Relation Problem. I've always taken the IRP to be looking for
linear combinations of several given numbers (such as pi, e, and gamma),
with integer coefficients, that are close to 0, or close to 0 (mod 1).
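For the one-dimensional baseline Gene mentions, continued-fraction convergents already achieve error of order r^(-2). A quick sketch in Python (the function name `convergents` and the float-based quotient loop are mine, not from the thread):

```python
from fractions import Fraction
import math

def convergents(x, n):
    """First n continued-fraction convergents p/q of x."""
    p0, q0, p1, q1 = 0, 1, 1, 0   # standard recurrence seeds
    out = []
    for _ in range(n):
        a = math.floor(x)
        p0, q0, p1, q1 = p1, q1, a * p1 + p0, a * q1 + q0
        out.append(Fraction(p1, q1))
        frac = x - a
        if frac == 0:
            break
        x = 1 / frac
    return out

# Each convergent p/q of sqrt(2) satisfies |sqrt(2) - p/q| < 1/q^2.
for c in convergents(math.sqrt(2), 8):
    assert abs(math.sqrt(2) - c) < 1 / c.denominator**2
```

Gene's question is whether anything comparable exists when two irrationals must share one denominator.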
Gene's problem is different, since he's essentially asking for only one number, a "common denominator" to use for two approximations. I don't know of an LLL version that solves this problem, but I'm no expert on LLL. It's certainly a latticey kind of problem.

Gene's order r^(-3/2) conjecture is Theorem 200 of Hardy & Wright, An Introduction to the Theory of Numbers. For dimension N, assuming at least one of the Xi is irrational, there are infinitely many R such that R*(X1,X2,...,XN) (mod 1) is within 1/R^(1/N) of (0,0,...,0) in each coordinate. Setting N=2 and dividing by R gives R^(-3/2). The proof idea is to divide the unit cube into subcubes of side 1/[R^(1/N)]; some subcube must contain two multiples of the X vector, and the difference of their multipliers is the desired R.

There's a fine point here worth mentioning: we can find Rs such that the distance-of-multiple-to-integer is bounded by 1/R^(1/N), and the distance of Xi to numerator/R is bounded by 1/R^(1+1/N). But if we choose a denominator limit Rmax, there's no guarantee of a denominator R <= Rmax that is within 1/Rmax^(1+1/N). The best we can guarantee is 1/(R*Rmax^(1/N)).

A 1-dimensional example: suppose X is irrational with decimal expansion 1/10 + 1/10^10 + ...; one of Liouville's numbers would serve. Let Rmax = 10^6, and look at abs(X - P/R) with R <= Rmax. Can we make it < Rmax^(-2) = 10^(-12)? No! Unless R is a multiple of 10, abs(R*X - P) > .1 - .0001; if 10|R, then the best P/R reduces to 1/10, and abs(X - 1/10) > 10^(-10).

---

The following algorithm gives a reasonable method for finding good denominators, but it has some pitfalls that I'll sketch at the end. It's a 2-dimensional generalization of one of my pre-LLL relation-finding methods. X,Y are irrational. We want multiples N*(X,Y) (mod 1) close to (0,0). Our starting square is (-1/3,1/3)^2, with area 4/9.
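The pigeonhole guarantee is easy to check numerically by brute force. A sketch (the helper names `dist_to_int` and `best_common_denominator` are mine): among 1001 multiples of (X,Y) and 31^2 = 961 subsquares of side 1/31, two multiples must share a subsquare, so some R <= 1000 has both residues below 1/31.

```python
import math

def dist_to_int(t):
    """Distance from t to the nearest integer."""
    return abs(t - round(t))

def best_common_denominator(x, y, rmax):
    """Among denominators R <= rmax, find the R whose multiples R*x and
    R*y are jointly closest to integers; return (R, residue)."""
    return min(
        ((r, max(dist_to_int(r * x), dist_to_int(r * y)))
         for r in range(1, rmax + 1)),
        key=lambda t: t[1],
    )

# The pigeonhole argument promises a residue below 1/floor(sqrt(rmax));
# dividing by R then gives the order-R^(-3/2) approximation error.
r, res = best_common_denominator(math.sqrt(2), math.sqrt(3), 1000)
assert res < 1 / math.floor(math.sqrt(1000))
```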
Find, by trial, four multipliers N++, N+-, N-+, and N-- such that

  N++ * (X,Y) mod 1 lies in the subsquare (0,1/3) x (0,1/3),
  N+- * (X,Y) mod 1 lies in the subsquare (0,1/3) x (-1/3,0),
  N-+ * (X,Y) mod 1 lies in the subsquare (-1/3,0) x (0,1/3), and
  N-- * (X,Y) mod 1 lies in the subsquare (-1/3,0) x (-1/3,0).

That is, we want a multiple in each quadrant, not too far from either axis. Let Nmax = max(N++, N+-, N-+, N--). (The multipliers Nxx are all > 0.)

List all multiples of (X,Y) (mod 1) up to Nmax*(X,Y) that fall in the four subsquares. Each subsquare has at least one multiple. Within each subsquare, select the multiple Mi that's "best". Probably we'll use the metric max(abs(M*X mod 1), abs(M*Y mod 1)), choosing the point whose distance to the farther axis is minimal. Other reasonable definitions of "best" are also possible.

We started with 1/3 as the worst error of our approximations, and we can shrink it down a little. We're assuming X and Y are irrational, so no multiple actually has a coordinate that's exactly +-1/3 (mod 1). Reduce the size of the square to just accommodate the actual best multiples. (All 4 subsquares must stay the same size.) Discard any multiples that fall outside the reduced square; at least one multiple must still remain in each quadrant. As a side effect, we can produce a sequence of increasingly good approximations, covering all multiples <= Nmax.

Now I sketch an iterative step, repeated indefinitely, that increases Nmax while reducing the size of the square containing the good approximations. At each stage, the square contains all the good multipliers <= Nmax. We select the largest multiplier within each quadrant, and use the four corresponding multiples as base points. Let Nmin = the minimum of the four multipliers. For each base point, try adding each good multiple (from all quadrants) to the base point. Keep any new multiple that lands within the square. (There might be a lot, or maybe nothing new.)
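The startup search above can be sketched directly (the helper name `startup_multiples` is mine; residues are normalized to (-1/2,1/2)): scan N = 1, 2, 3, ... until each of the four subsquares of (-1/3,1/3)^2 has caught a multiple.

```python
import math

def startup_multiples(x, y, bound=10**6):
    """Scan N = 1, 2, ... until each open quadrant of the starting
    square (-1/3, 1/3)^2 contains some multiple N*(x, y) mod 1.
    Returns a dict mapping the sign pair (x>0, y>0) to the first
    multiplier landing in that quadrant."""
    found = {}
    for n in range(1, bound):
        a = (n * x + 0.5) % 1 - 0.5   # residue of n*x in [-1/2, 1/2)
        b = (n * y + 0.5) % 1 - 0.5
        if 0 < abs(a) < 1/3 and 0 < abs(b) < 1/3:
            found.setdefault((a > 0, b > 0), n)
            if len(found) == 4:
                break
    return found

# For independent irrationals the multiples equidistribute, so all four
# quadrants fill quickly; Nmax is the largest of the four multipliers.
quads = startup_multiples(math.sqrt(2), math.sqrt(3))
nmax = max(quads.values())
```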
We now have a (probably larger) list of multiples that lie within the square, and it is complete up to Nmax+Nmin. This becomes the new Nmax. We discard multiples larger than this limit; we'll re-find them later.

Now we try to shrink the square: within each quadrant, choose the best point; then take the worst of these four points; if this point is new, we can shrink the square to just include it. This will probably discard some multiples that fall outside the reduced square. This concludes the iterative step. The stopping rule is to quit when some multiple is close enough to (0,0).

Each step of the iteration is guaranteed to increase Nmax, but the increment could be tiny. There's no guarantee that an individual step will shrink the square, but the size must ultimately approach 0, since multiples of (X,Y) are dense[*] in the unit square.

The practical problems of the algorithm are similar to those of using a subtraction algorithm to compute GCDs: you can get "nearly stuck", developing a lot of multiples that fit in the square while one quadrant makes no progress. Perhaps there's a way to improve the method, similar to using division for GCDs.

The startup step can also be a problem. It's easy in the typical case. Some bad cases: if one of X,Y is small, say X=.0001..., then there are a lot of multiples of X to record before reaching .6666 (a residue near -1/3, needed to populate the negative-X quadrants).

[*] Also, the density argument fails when there's an integer relation between X and Y: the multiples of (X,Y) then fall on rational-slope lines. For example, if X=pi-3 and Y=2X, no multiples of (X,Y) fall in the 2nd or 4th quadrant of the starting square (residues normalized to lie in (-.5,.5)): a pair of residues with opposite signs would force one of them past 1/3 in absolute value, outside the square.

Rich rcs@cs.arizona.edu
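As a numerical postscript, the footnote's degenerate case is easy to verify (the helper `quadrant_hits` is mine): with X = pi-3 and Y = 2X, no multiple lands in a mixed-sign quadrant of the starting square, while both same-sign quadrants fill up.

```python
import math

def quadrant_hits(x, y, nmax, side=1/3):
    """Count the multiples N*(x, y) mod 1, N <= nmax, falling in each
    open quadrant of the square (-side, side)^2; keys are sign pairs."""
    counts = {(sx, sy): 0 for sx in (1, -1) for sy in (1, -1)}
    for n in range(1, nmax + 1):
        a = (n * x + 0.5) % 1 - 0.5   # residue in [-1/2, 1/2)
        b = (n * y + 0.5) % 1 - 0.5
        if 0 < abs(a) < side and 0 < abs(b) < side:
            counts[(1 if a > 0 else -1, 1 if b > 0 else -1)] += 1
    return counts

# With Y = 2X, the multiples lie on slope-2 lines mod 1, so the
# mixed-sign quadrants of the starting square stay empty.
x = math.pi - 3
hits = quadrant_hits(x, 2 * x, 10000)
```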
Richard Schroeppel