[math-fun] minimax polynomials
Does anyone know of any references for finding the minimax polynomial approximation to x^N on some interval that has degree k < N? Thanks, J.P. Grossman
Does anyone know of any references for finding the minimax polynomial approximation to x^N on some interval that has degree k < N?
Google for Remez algorithm. And maybe "equal ripple". I wrote one in Macsyma for Livermore, but I forget which book I cribbed from. The idea is brilliantly simple: it's a quadratically convergent iteration that alternately equalizes amplitudes at the estimated turning points, and then recomputes those turning points.

It works equally well for rational functions, and my Macsyma program will try to determine coefficients in just about any smooth approximating expression. But in practice, as soon as you get a little bit fancy, locating, and even counting, the turning points gets hairy, and you need some way to give the program advice. E.g., your optimal solution may have more ripples than degrees of freedom, the surplus of which are of lesser amplitude. This can even happen with polynomial approximations if the coefficients are interdependent. Also, you typically need multiprecision numerics. (And, of course, symbolic differentiation and Newton's method or the like.) --rwg ARTHROSCOPE PROTHORACES CRAPSHOOTER
Chebyshev approximation will get exact coefficients for what I think you are specifying. It should be just a few lines of code, rather than the Remez algorithm, which is certainly more complex. RJF
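A minimal sketch (not from the thread) of RJF's "few lines of code" suggestion: expand x^N in the Chebyshev basis and drop the terms above degree k, which gives a near-minimax approximation in closed form. The choice N = 8, k = 5 here is an arbitrary illustration; the truncated terms (T6 + 8·T6 etc.) bound the error.

```python
# Sketch: truncated Chebyshev series of x^N on [-1,1] as a degree-k
# near-minimax approximation. N = 8, k = 5 are illustrative choices.
import numpy as np
from numpy.polynomial import chebyshev as C

N, k = 8, 5
cheb = C.poly2cheb([0] * N + [1])   # Chebyshev coefficients of x^N
p = C.Chebyshev(cheb[: k + 1])      # drop the terms above degree k

xs = np.linspace(-1, 1, 2001)
max_err = np.max(np.abs(xs**N - p(xs)))
print(max_err)   # dropped terms are (8*T6 + T8)/128, so max error is 9/128
```

The truncation error here, (8·T6 + T8)/128, peaks at x = ±1 where both polynomials equal 1, giving 9/128; the true minimax error for the same degree is somewhat smaller, which is Gosper's point about Remez vs. plain truncation.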
-----Original Message----- From: R. William Gosper [mailto:rwg@osots.com] Sent: Friday, October 27, 2006 12:42 AM To: math-fun@mailman.xmission.com Subject: Re: [math-fun] minimax polynomials
jpg> Does anyone know of any references for finding the minimax
jpg> polynomial approximation to x^N on some interval that has degree k < N?
I said
Google for Remez algorithm. And maybe "equal ripple". I wrote one in Macsyma for Livermore, but I forget which book I cribbed from. The idea is brilliantly simple. It's a quadratically convergent iteration that alternately equalizes amplitudes at the estimated turning points and endpoints, and then recomputes those turning points.
What's cool is it doesn't try to minimize anything--just equalize ripple. And you can have a fairly arbitrary weight function.
It works equally well for rational functions, and my Macsyma program will try to determine coefficients in just about any smooth approximating expression. But in practice, as soon as you get a little bit fancy, locating, and even counting, the turning points gets hairy, and you need some way to give the program advice. E.g., your optimal solution may have more ripples than degrees of freedom, the surplus of which are of lesser amplitude. This can even happen with polynomial approximations if the coefficients are interdependent.
E.g., suppose for x^N you're minimaxing the *relative* error and one of your endpoints is 0. Then a0 = 0 and you've lost a degree of freedom. Another possibility is undetermined terms in a continued fraction.
Also, you typically need multiprecision numerics. (And, of course, symbolic differentiation and Newton's method or the like.)
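The iteration described above can be sketched in a few dozen lines for the plain polynomial case. This is an illustrative reconstruction, not Gosper's Macsyma program: it alternates between (a) solving the linear system that levels the error to ±E at k+2 reference points and (b) moving those points to the extrema of the error (located here by a grid search over sign-runs, rather than the symbolic differentiation plus Newton's method the post mentions).

```python
# Sketch of the Remez exchange for the best degree-k polynomial
# approximation to f on [a,b] (double precision, grid-based extrema
# search; real implementations use root-finding and multiprecision).
import numpy as np

def remez_poly(f, k, a, b, iters=30):
    # Initial reference: Chebyshev extrema mapped to [a,b].
    x = (a + b) / 2 + (b - a) / 2 * np.cos(np.pi * np.arange(k + 2) / (k + 1))
    x = np.sort(x)
    for _ in range(iters):
        # Levelling step: solve  sum_j c_j x_i^j + (-1)^i E = f(x_i).
        A = np.column_stack([np.vander(x, k + 1, increasing=True),
                             (-1.0) ** np.arange(k + 2)])
        *c, E = np.linalg.solve(A, f(x))
        c = np.array(c)
        # Exchange step: move the reference to the error's extrema,
        # one per sign-run of the error on a fine grid.
        xs = np.linspace(a, b, 8001)
        err = f(xs) - np.polyval(c[::-1], xs)
        runs = np.split(np.arange(xs.size),
                        np.where(np.diff(np.sign(err)) != 0)[0] + 1)
        if len(runs) != k + 2:   # ripple count off: the "give it advice" case
            break
        x = np.array([xs[r[np.argmax(np.abs(err[r]))]] for r in runs])
    return c, abs(E)

# Gosper's example below: cubic minimax of x^4 on [0,1].
c, E = remez_poly(lambda x: x**4, 3, 0.0, 1.0)
print(np.round(c, 6), E)   # exact answer: c = (-1/128, 1/4, -5/4, 2), E = 1/128
```

Note that nothing is explicitly minimized, matching the remark above: each levelling step only equalizes the ripple at the current reference, and the exchange step is what drives the reference toward the true extrema.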
For a case like x^N, the Remez equations are all algebraic. Minimaxing absolute error in the cubic approximating x^4 on [0,1], I got a mess of complex solutions, plus two reals. One was x^4 - T[4](x)/8, as you'd expect, since Chebyshevs are equal-ripple approximations to 0. This turned out minimax on [-1,1], with amplitude |mu| = 1/8, and extrema -1, -1/sqrt(2), 0, 1/sqrt(2), and 1. But much better (on [0,1]) was x^4 ~ 2x^3 - 5x^2/4 + x/4 - 1/128, with |mu| = 1/128 and extrema 0, (2-sqrt(2))/4, 1/2, (2+sqrt(2))/4, and 1. There were also nonsense solutions with superposed turning points, and turning points outside [0,1]. For larger N, you'll probably need to solve numerically and Recognize the algebraics. --rwg TRIG TABLE LITTERBAG
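The equal-ripple claim for the [0,1] cubic is easy to check numerically (this check is mine, not from the thread): the error of the stated cubic against x^4, evaluated at the five listed extrema, should alternate between +1/128 and -1/128.

```python
# Check Gosper's cubic: error of x^4 - p(x) at the five extrema on [0,1]
# should be an alternating ripple of amplitude 1/128.
import math

p = lambda x: 2*x**3 - 5*x**2/4 + x/4 - 1/128
pts = [0.0, (2 - math.sqrt(2)) / 4, 0.5, (2 + math.sqrt(2)) / 4, 1.0]
ripples = [(x**4 - p(x)) * 128 for x in pts]   # scaled so ripple is +-1
print([round(r, 9) for r in ripples])          # alternates +1, -1, +1, -1, +1
```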
I just found out today that Paul Halmos died, on October 2. I understand there's an obituary in the New York Times, and there is also one on the MAA web page: http://www.maa.org/news/100306halmos.html. -- Stan Isaacs 210 East Meadow Drive Palo Alto, CA 94306 stan@isaacs.com
I said
For a case like x^N, the Remez equations are all algebraic. Minimaxing absolute error in the cubic approximating x^4 on [0,1], I got a mess of complex solutions, plus two reals. One was x^4 - T[4](x)/8, as you'd expect, since Chebyshevs are equal-ripple approximations to 0. This turned out minimax on [-1,1], with amplitude |mu| = 1/8, and extrema -1, -1/sqrt(2), 0, 1/sqrt(2), and 1. But much better (on [0,1]) was x^4 ~ 2x^3 - 5x^2/4 + x/4 - 1/128, with |mu| = 1/128 and extrema 0, (2-sqrt(2))/4, 1/2, (2+sqrt(2))/4, and 1.
Privately, Rich Fateman said
Chebyshev approximations can be used on any interval by a linear transformation.
Indeed, 2x^3 - 5x^2/4 + x/4 - 1/128 is just x^4 - T[4](2x-1)/128. But Chebyshevs only work once. I.e., you can't Tcheb off the 2x^3 term and get an equal-ripple quadratic approximation to x^4. Nor can you introduce weight functions, nor relative errors. But Tchebs are useful for Remezing in two different ways: their turning points are good first approximations to Remez turning points, and likewise their coefficients, either for initializing the iterative algorithm, or for a direct Newton's attack on the whole system. --rwg PAST DUE UPDATES (speaking of which: Is there really no PolynomialDiscriminant in Mma? It's just Resultant[p,p',...].)
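The shifted-Chebyshev identity can be confirmed in exact rational arithmetic (a check of mine, not from the thread): two quartics that agree at five or more points are identical, so sampling at eight rational points settles it.

```python
# Exact check that 2x^3 - 5x^2/4 + x/4 - 1/128 == x^4 - T4(2x-1)/128,
# using T4(u) = 8u^4 - 8u^2 + 1 and rational arithmetic throughout.
from fractions import Fraction as F

def T4(u):
    return 8 * u**4 - 8 * u**2 + 1

agree = all(
    2*x**3 - F(5, 4)*x**2 + F(1, 4)*x - F(1, 128)
    == x**4 - T4(2*x - 1) / 128
    for x in (F(j, 7) for j in range(8))   # 8 points pin down a quartic
)
print(agree)
```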
participants (4)
- J.P. Grossman
- R. William Gosper
- Richard Fateman
- Stan E. Isaacs