With the 5th-degree equation:
x(x² + a)(x² + b) = c
x, a, b, c real numbers > 0; c > ab
what is the fastest method to get an approximated value of x? (very few steps)
Christian.
Sorry, some mistakes in my previous hypothesis.
With the 5th-degree equation:
x(x² + a)(x² + b) = c
x, a, b, c real numbers; a, b > 0; c > (1+a)(1+b); x > 1
what is the fastest method to get an approximated value of x? (very few steps)
Christian.
On 2/2/07, Christian Boyer <cboyer@club-internet.fr> wrote:
Sorry, some mistakes in my previous hypothesis.
With the 5th-degree equation:
x(x² + a)(x² + b) = c
x, a, b, c real numbers; a, b > 0; c > (1+a)(1+b); x > 1
what is the fastest method to get an approximated value of x? (very few steps)
How about iterating x -> c/((x^2 + a)(x^2 + b))? Nope, that seems awful. (In general, it seems to lead to a cycle of period 2 instead of converging, or maybe I made a mistake in my calculations.)

OK, then, how about iterating x -> sqrt( c/(x*(x^2 + a)) - b )? Nope, that leads to complex values of x, unless maybe I made a mistake in my calculations.

Hm, this is harder than I thought it would be! OK, I think I'd just resort to plain old Newton's method then. Let f(x) = x*(x^2 + a)*(x^2 + b) - c, and map x -> x - f(x)/f'(x). A dozen or two iterations seems to be enough ...

--Joshua Zucker
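For concreteness, a minimal Python sketch of the Newton iteration described above (the function name, starting value and stopping rule are illustrative choices, not from the thread):

    def solve_newton(a, b, c, x=1.0, tol=1e-12, max_iter=200):
        """Solve x*(x^2 + a)*(x^2 + b) = c for the unique real root."""
        for _ in range(max_iter):
            f = x * (x**2 + a) * (x**2 + b) - c          # f(x)
            df = 5 * x**4 + 3 * (a + b) * x**2 + a * b   # f'(x), always > 0 for a, b > 0
            step = f / df
            x -= step
            if abs(step) < tol * abs(x):
                break
        return x

Since f'(x) = 5x^4 + 3(a+b)x^2 + ab never vanishes for a, b > 0, the division is always safe; starting from x = 1 matches the hypothesis x > 1.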
You need to specify to what accuracy you want the approximation. Usually a good method to begin with is the plain old schoolroom secant method. If you wanted very high accuracy, it might be worth switching to a higher-order method once the iteration is well under way.

However, choosing a suitable iteration is the easy part --- trickier is finding a good starting point in the first place. In this case it would help to have bounds on the constants a, b, c if possible: for example, if we knew they were close to unity, or alternatively quite large, it would be easier to make an intelligent initial guess.

Fred Lunnon

On 2/2/07, Christian Boyer <cboyer@club-internet.fr> wrote:
Sorry, some mistakes in my previous hypothesis.
With the 5th-degree equation:
x(x² + a)(x² + b) = c
x, a, b, c real numbers; a, b > 0; c > (1+a)(1+b); x > 1
what is the fastest method to get an approximated value of x? (very few steps)
Christian.
About x(x² + a)(x² + b) = c

Thanks Joshua and Fred for your first answers. I know these old methods. But sometimes better-suited methods exist, depending on the equation to analyze.

a, b, c can vary greatly, but are large numbers, not close to 1. b is approximately the same size as a, say a < b < 12a.

And because x(x²+a)(x²+b) is an increasing function, we are sure that the solution x exists, is real, and is unique.

Fred, you are perfectly right, the key is to find a good starting point. What would be a good starting point?

Christian.
Let's assume you're looking for moderate accuracy, say 10 sig figs.

If a, b >> 1, but c < 4ab or so, then x ~ 1 and x ~ c/(ab) give good starting values for the secant method: say 10 sig figs in 10-25 iterations.

When c is larger, as Gene points out, x ~ c^(1/5): of course, you need a separate iteration just to get this starting value. Even then, the secant method is useless until the two function values are roughly the same size --- until you get to that state, you will probably have to use binary subdivision as he suggests.

Given that situation, it may well be faster to forget about fifth roots altogether: use subdivision in reverse to find your starting values in the first place, e.g. start from x = 1 and iterate x := 2*x until your function changes sign. Then binary subdivision, then secant. (A short Python sketch of this recipe follows the quoted message below.)

Newton's method is slower than secant, unless you can compute the derivative easily from the function: the trouble is that it costs you two evaluations to get order-2 convergence, so effectively its order is only 1.4142 compared to the secant's 1.6180.

If you could pin down the ranges of a, b, c sufficiently, you might be able to use series reversion to express x as a convergent series in a, b, c, or rather in some functions of these which are << 1; you might even try Padé approximation in this context. Then refine via iteration as before.

Fred Lunnon

On 2/2/07, Christian Boyer <cboyer@club-internet.fr> wrote:
About x(x² + a)(x² + b) = c
Thanks Joshua and Fred for your first answers. I know these old methods. But sometimes better-suited methods exist, depending on the equation to analyze.
a, b, c can vary greatly, but are large numbers, not close to 1. b is approximately the same size as a, say a < b < 12a.
And because x(x²+a)(x²+b) is an increasing function, we are sure that the solution x exists, is real, and is unique.
Fred, you are perfectly right, the key is to find a good starting point. What would be a good starting point?
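As referenced above, here is a rough Python sketch of the doubling / bisection / secant recipe (the number of bisection steps and the tolerances are guesses on my part, not from the thread):

    def solve_bracket_secant(a, b, c, tol=1e-10):
        f = lambda x: x * (x**2 + a) * (x**2 + b) - c
        # 1. Doubling: since c > (1+a)(1+b), f(1) < 0; double until the sign changes.
        lo, hi = 1.0, 2.0
        while f(hi) < 0:
            lo, hi = hi, 2.0 * hi
        # 2. A few bisection steps to tighten the bracket [lo, hi].
        for _ in range(10):
            mid = 0.5 * (lo + hi)
            if f(mid) < 0:
                lo = mid
            else:
                hi = mid
        # 3. Secant iteration started from the two bracket ends.
        x0, x1 = lo, hi
        f0, f1 = f(x0), f(x1)
        for _ in range(50):
            if f1 == f0:
                break
            x2 = x1 - f1 * (x1 - x0) / (f1 - f0)
            x0, f0, x1, f1 = x1, f1, x2, f(x2)
            if abs(x1 - x0) < tol * abs(x1):
                break
        return x1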
Another technique, effective if the parameters a, b, c vary systematically, is to get good starting values by extrapolation. E.g. suppose c is stepping up by constant increments in an inner loop, while a, b stay fixed, and the previous solutions were x_{i-1}, x_{i-2}, .... Then x_{i-1} and x_{i-1} + 2(x_{i-1} - x_{i-2}) are good starters for secant iteration; or x_{i-1} + (x_{i-1} - x_{i-2}) for Newton.

Obviously once the loop is underway, the approach could be extended to use higher-order methods: difference tables, Newton interpolation, Padé approximation etc. Intelligently applied in a continuous mechanical simulation requiring only low accuracy, this approach often requires just a single iteration to refine the initial guess!

Fred Lunnon
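As an illustration of the warm-start idea (my own sketch, assuming c is swept in equal steps while a and b stay fixed, with a plain Newton refinement inside the loop):

    def sweep_c(a, b, c_values, tol=1e-12):
        xs = []
        for i, c in enumerate(c_values):
            if i < 2:
                x = max(1.0, c / (a * b))        # cold start for the first two values
            else:
                x = xs[-1] + (xs[-1] - xs[-2])   # extrapolated Newton starter from the post
            for _ in range(50):                  # refine by Newton's method
                f = x * (x**2 + a) * (x**2 + b) - c
                df = 5 * x**4 + 3 * (a + b) * x**2 + a * b
                x -= f / df
                if abs(f) < tol * max(abs(c), 1.0):
                    break
            xs.append(x)
        return xs

Once the sweep is under way, each call should need only one or two refinement steps, which is the point of the extrapolation.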
I'll have to eat my words here, not for the first time.

First off, the secant method is absolutely dreadful, at any rate for this equation. [The literature ignores it completely, and now I know why --- I must just have been lucky earlier.]

Secondly, a good starting value for iteration seems always to be min( c/(ab), c^(1/5) ), pace Gene --- it doesn't matter about the fifth root, because that of course needs computing only to low precision!

Finally, an absolutely cracking iterative method of which I was previously ignorant is Halley's third-order method --- with function and derivatives for this case
  f0 = f(x) = x*(x^2+a)*(x^2+b) - c,
  f1 = df/dx = 5x^4 + 3(a+b)x^2 + ab,
  f2 = d^2f/dx^2 = 20x^3 + 6(a+b)x,
we iterate
  x := x - f0 f1 / (f1^2 - f0 f2 / 2);
getting at least 10 sig figs in 2 or maybe 3 iterations.

By the way, the most awkward parameter set, where the curvature is maximal, is typified by a = 16.0, b = 16.0, c = 2560.0. And note that the problem can be reduced to two parameters by the substitution x' = x/sqrt(a), a' = 1, b' = b/a, c' = c/a^(5/2).

I found Halley in L. Volpi "Non Linear Equations", on the web at http://www.digilander.libero.it/foxes/poly/Polynomial_zeros.pdf He claims it is more "stable" than Newton, and backs this up with a discussion of the basins of attraction. Inter alia, he also surveys a wide selection of other methods, most of which --- like those in journal articles --- are more concerned with finding _all_ roots of a polynomial.

Fred Lunnon
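Transcribing the above into Python (the starting value min(c/(ab), c^(1/5)) is the one suggested above; the stopping rule and iteration cap are mine):

    def solve_halley(a, b, c, tol=1e-12, max_iter=10):
        x = min(c / (a * b), c ** 0.2)                   # suggested starting value
        for _ in range(max_iter):
            f0 = x * (x**2 + a) * (x**2 + b) - c         # f(x)
            f1 = 5 * x**4 + 3 * (a + b) * x**2 + a * b   # f'(x)
            f2 = 20 * x**3 + 6 * (a + b) * x             # f''(x)
            step = f0 * f1 / (f1**2 - 0.5 * f0 * f2)     # Halley update
            x -= step
            if abs(step) < tol * abs(x):
                break
        return x

The awkward set mentioned above makes a natural test case, e.g. solve_halley(16.0, 16.0, 2560.0).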
Thanks Fred for your help. I didn't know Halley's method, and it works very well for this equation, in very few steps. Excellent!!!

Your link doesn't work. The correct one is without www: http://digilander.libero.it/foxes/poly/Polynomial_zeros.pdf And a summary here: http://mathworld.wolfram.com/HalleysMethod.html

Christian.
I fiddled around with this equation while on the airplane today with Maxima. If you want to do this, you should be aware of Maxima's "allroots" function, which is capable of quickly solving quintic polynomials numerically. Maxima also has a "realroots" function, which works, but doesn't tell you what is happening to the complex roots. Also, Maxima can plot the various roots, after you have stripped them into real & imaginary parts.

If you're just trying to understand what's going on with this equation, the restriction to x>1 and c>(1+a)(1+b) isn't necessary. If c=(1+a)(1+b), then x=1 is always a solution, so you can then focus on what's happening with the complex roots. For some equations this simplification might cause trouble, but for this equation nothing particularly different happens when c gets somewhat larger.

As Gene has already pointed out, there should be only one real root to this equation, with the complex roots in conjugate pairs. The only possibility for more real roots is if the derivative of the equation has real roots. But this would require that sqrt(9a^2-2ab+9b^2) - 3(a+b) > 0, which never happens when a,b>0. So the derivative has no real roots, which then implies that the quintic has only one real root. (We could probably also have used Sturm sequences to figure this out, but I haven't figured out how to get Maxima to do them yet.)

When c=(1+a)(1+b) and b=a=0, we have x^5-1, whose 5 roots lie on the unit circle. Allowing a to get bigger (even _much_ bigger), the "circle" of roots gets bigger and more ovoid, but the behavior doesn't change very much. When b starts differing from a, we get a slightly different ovoid shape, but the behavior still doesn't change very much.

So, in answer to your original question about initial approximations to the real solution, we should keep the following things in mind:

1. The sum of all the roots is zero, since the coefficient of x^4 is zero. So the roots surround the origin.

2. The product of all the roots is c, which gives some idea of the size of the mushed/ovoid "circle". However, when c=(1+a)(1+b), x=1 is always a solution, so the position of the real solution may not get large even when c is very large.

So, my recommendation is to start any approximation for a real root at x=1 ! Newton's convergence is not so hot around multiple roots, and tightly clustered roots (as in cyclotomic polynomials) can look like a multiple root from far away, but we seem to be starting close enough that Newton has its usual quadratic vigor, so the convergence is very fast.

There are more complex iterations that attempt to approximate _all_ of the roots "simultaneously", starting from an initial approximation to each root. I believe Henrici studied some of these methods. However, I almost always found these methods to be too much trouble, and not really "simultaneous" at all. The reason is that they only start working well when at least one root is approximated very well, and then the others come "into view" essentially one by one (or as conjugate pairs). So you do almost as well by finding the roots one by one (or by conjugate pairs), and then "deflating" the polynomial by dividing out the known factor. (A rough numpy analogue of these experiments follows the quoted message below.)

There is also a classical Newton method for approximating a conjugate pair of roots simultaneously; it guesses a quadratic factor for the polynomial & iterates. I did a quick Google search to find the name of this algorithm, but wasn't able to find it. If anyone cares, I'll do a deeper search.
I think that it appeared in one of the early ACM publications.

P.S. I also tried experiments with a general expression for a quintic with 1 real and 2 pairs of conjugate complex roots, and equated coefficients with this particular quintic. I was able to algebraically eliminate many of the variables, and got a relatively compact algebraic relationship among the rest. This is fortuitous, as a very large expression factored as the eighth (where did eight come from ??) power of a much smaller expression. However, it didn't provide much insight.

At 03:23 AM 2/2/2007, Christian Boyer wrote:
Sorry, some mistakes in my previous hypothesis.
With the 5th-degree equation:
x(x² + a)(x² + b) = c
x, a, b, c real numbers; a, b > 0; c > (1+a)(1+b); x > 1
what is the fastest method to get an approximated value of x? (very few steps)
Christian.
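A rough Python/numpy analogue of the Maxima experiments referenced in the message above, for readers without Maxima (numpy.roots plays the role of "allroots"; the helper names are my own):

    import numpy as np

    def quintic_roots(a, b, c):
        # coefficients of x^5 + (a+b) x^3 + a*b x - c, highest degree first
        return np.roots([1.0, 0.0, a + b, 0.0, a * b, -c])

    def deflate(coeffs, r):
        """Synthetic division: divide the polynomial (highest degree first) by (x - r)."""
        out = [coeffs[0]]
        for ck in coeffs[1:-1]:
            out.append(ck + r * out[-1])
        return out   # the remainder coeffs[-1] + r*out[-1] should be ~0

    a, b = 16.0, 16.0
    c = (1 + a) * (1 + b)                      # makes x = 1 an exact root
    roots = quintic_roots(a, b, c)
    x_real = roots[np.abs(roots.imag) < 1e-9].real[0]
    quartic = deflate([1.0, 0.0, a + b, 0.0, a * b, -c], x_real)

The deflated quartic carries the two conjugate pairs; its roots sum to -x_real (since the quintic has no x^4 term) and its constant term is c/x_real, matching points 1 and 2 above.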
On Tuesday 13 February 2007 20:38, Henry Baker wrote:
There is also a classical Newton method for approximating a conjugate pair of roots simultaneously; it guesses a quadratic factor for the polynomial & iterates. I did a quick Google search to find the name of this algorithm, but wasn't able to find it. If anyone cares, I'll do a deeper search. I think that it appeared in one of the early ACM publications.
Sounds like Bairstow's method. It's in "Numerical Recipes" under the name "qroot". (This should not be taken as an endorsement of "Numerical Recipes".) -- g
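For reference, a hedged Python sketch of the quadratic-factor idea in the spirit of Bairstow's method: Newton iteration on the remainder left after dividing the polynomial by x^2 + u*x + v. For brevity the 2x2 Jacobian is formed here by finite differences rather than the classical recurrences, so this illustrates the idea only and is not the Numerical Recipes "qroot" routine:

    def quad_remainder(p, u, v):
        """Remainder (r, s), where p(x) = (x^2 + u*x + v)*q(x) + r*x + s;
        p is a coefficient list, highest degree first, len(p) >= 3."""
        b = [p[0], p[1] - u * p[0]]
        for ak in p[2:]:
            b.append(ak - u * b[-1] - v * b[-2])
        return b[-2], b[-1] + u * b[-2]

    def bairstow_like(p, u=0.0, v=1.0, tol=1e-12, h=1e-7, max_iter=200):
        # u, v: initial guess for the quadratic factor x^2 + u*x + v
        for _ in range(max_iter):
            r, s = quad_remainder(p, u, v)
            if abs(r) + abs(s) < tol:
                break
            # finite-difference Jacobian of (r, s) with respect to (u, v)
            ru, su = quad_remainder(p, u + h, v)
            rv, sv = quad_remainder(p, u, v + h)
            j11, j12 = (ru - r) / h, (rv - r) / h
            j21, j22 = (su - s) / h, (sv - s) / h
            det = j11 * j22 - j12 * j21
            u -= (j22 * r - j12 * s) / det
            v -= (j11 * s - j21 * r) / det
        return u, v          # x^2 + u*x + v is then (nearly) a factor of p

For the quintic of this thread the coefficient list is [1, 0, a + b, 0, a*b, -c]; once a quadratic factor is found, one can deflate and repeat to pick off the two conjugate pairs.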
participants (5)
- Christian Boyer
- Fred Lunnon
- Gareth McCaughan
- Henry Baker
- Joshua Zucker