Yes, I've played with a version of this idea over the years, but I've never gotten a satisfactory algorithm out of it. The idea is an old one, but I don't recall off the top of my head the name usually attached to it. One of the problems is that the factor (x-r)/(x-r'), for constants r and r' that are close but not equal, doesn't act as much like "1" as one might like. So this "shielding" of a factor you're sufficiently close to doesn't work as well as one might hope. From an algorithmic efficiency point of view, you're better off finding the root you're already close to, because convergence there is quadratic. Then you divide out that root, so you have a smaller polynomial to work with for the next step.
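In rough Python, the find-then-deflate loop might look like this (the coefficient representation, tolerance, and starting guesses are arbitrary choices for illustration; it also assumes each Newton run actually reaches a root from the given guess, and complex roots would need complex guesses):

    # Newton's method plus deflation: converge on the nearest root,
    # then divide out the factor (x - r) and repeat on the quotient.

    def poly_eval(coeffs, x):
        """Evaluate p(x) and p'(x) by Horner's rule; coeffs[0] is the leading coefficient."""
        p, dp = 0.0, 0.0
        for a in coeffs:
            dp = dp * x + p
            p = p * x + a
        return p, dp

    def newton_root(coeffs, x0, tol=1e-12, max_iter=100):
        """Quadratic convergence to the root nearest a good guess x0."""
        x = x0
        for _ in range(max_iter):
            p, dp = poly_eval(coeffs, x)
            if dp == 0.0:
                break                       # landed on a critical point; give up
            step = p / dp
            x -= step
            if abs(step) < tol:
                break
        return x

    def deflate(coeffs, r):
        """Synthetic division of p(x) by (x - r), dropping the remainder."""
        q = [coeffs[0]]
        for a in coeffs[1:-1]:
            q.append(a + r * q[-1])
        return q

    def all_roots(coeffs, x0=0.0):
        """Peel off one root at a time; each deflation shrinks the polynomial."""
        roots = []
        while len(coeffs) > 2:
            r = newton_root(coeffs, x0)
            roots.append(r)
            coeffs = deflate(coeffs, r)
        roots.append(-coeffs[1] / coeffs[0])   # the final linear factor
        return roots

At 03:43 PM 9/15/2010, Allan Wechsler wrote: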
Henry's post got me thinking in a different direction, maybe not the one he intended. I was thinking about Newton's method and its vulnerability to instabilities: even when it works perfectly and converges fast, it gets preoccupied with one root and ignores the others unless you restart it.
So I started wondering whether anybody had played with a sort of adaptive version of Newton's algorithm. While one "thread" is converging on a root, a "kibitzer" thread divides the original function by x-r, as if r were the actual root, and gets a modified function that can be the host for another round of Newton's algorithm that would be scared away from the first root and be more likely to converge on the others. Perhaps I'm barking up a squirrel-less tree, but I was imagining different threads converging on different roots, and dividing out by each other's approximations in order to "isolate" their own target root.
Has anyone heard of an approach like this?
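For what it's worth, a flattened (thread-free) Python sketch of the scheme is below. Each "thread" takes a Newton step on f(x)/prod(x - r_j), with the r_j frozen at the other threads' current estimates; done one root at a time this is what the literature calls Maehly's procedure, and run simultaneously it is essentially the Aberth-Ehrlich method. All details here (guesses, iteration count, the example cubic) are illustrative only:

    def suppressed_step(f, df, x, others):
        """One Newton step on f(x)/prod(x - r) over the other estimates r.
        Since (log prod(x - r))' = sum 1/(x - r), the quotient never has
        to be formed explicitly."""
        fx, dfx = f(x), df(x)
        repel = sum(1.0 / (x - r) for r in others)
        return x - fx / (dfx - fx * repel)

    def simultaneous_roots(f, df, guesses, iters=50):
        """Each estimate repels the others, so distinct guesses get pushed
        toward distinct roots."""
        xs = list(guesses)
        for _ in range(iters):
            xs = [suppressed_step(f, df, xs[k], xs[:k] + xs[k+1:])
                  for k in range(len(xs))]
        return xs

    # Example on the cubic 4*c^3 - 3*c - c3 with c3 = 0.2:
    c3 = 0.2
    f  = lambda c: 4*c**3 - 3*c - c3
    df = lambda c: 12*c**2 - 3
    print(simultaneous_roots(f, df, [-1.0, 0.1, 1.0]))
    # -> approximately [-0.8307, -0.0670, 0.8976]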
On Wed, Sep 15, 2010 at 6:14 PM, <rcs@xmission.com> wrote:
This suggests a meta-problem: If you find your numerical algorithm is converging slowly, can you introduce an extra dimension, perhaps imaginary, to speed things up?
Rich
-------------
Quoting Henry Baker <hbaker1@pipeline.com>:
Has anyone seen the following treatment of the real cubic?
A cubic equation with 3 real zeros can be reduced to either (your choice):
4*c^3-3*c=c3 or 4*s^3-3*s=-s3
where c, c3 remind us of cos(a), cos(3*a), and s, s3 of sin(b), sin(3*b), respectively.
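The reduction itself is the classical trigonometric substitution; here is a small Python sketch for the depressed cubic t^3 + p*t + q = 0 (variable names and sign conventions are just one choice, and a general cubic first needs the usual shift to kill its quadratic term):

    import math

    def chebyshev_form(p, q):
        """Map t^3 + p*t + q = 0 (three real roots, hence p < 0) onto
        4*c^3 - 3*c = c3 via the scaling t = m*c with m = 2*sqrt(-p/3).
        Three real roots guarantee |c3| <= 1."""
        m = 2.0 * math.sqrt(-p / 3.0)
        c3 = -4.0 * q / m**3            # equivalently (3*q)/(2*p)*sqrt(-3/p)
        return m, c3

    # Example: t^3 - 3*t + 1 = 0 becomes 4*c^3 - 3*c = -1/2, with t = 2*c.
    print(chebyshev_form(-3.0, 1.0))    # -> (2.0, -0.5)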
Now Newton's method _can_ be used to compute one of the zeros, but an unfortunate choice of starting point will attempt to converge on a double (or almost double) root, thereby seriously slowing the convergence.
For example, Newton on the cosine version gives
c' = (c3+8*c^3)/[3*(2*c-1)*(2*c+1)]
which leads to very unfortunate behavior for c near +-1/2, where the denominator (the derivative) vanishes.
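A throwaway numerical illustration (c3 and the starting point are picked only to sit near the degenerate case): since 4*c^3 - 3*c - 1 = (c - 1)*(2*c + 1)^2, taking c3 slightly below 1 puts an almost-double root near c = -1/2, and the iteration crawls.

    c3 = 0.9999
    c = -0.45                  # start near the almost-double root
    for k in range(12):
        c = (c3 + 8*c**3) / (3.0 * (2*c - 1) * (2*c + 1))
        print(k, c)
    # The error only roughly halves per step (linear convergence) until the
    # iterate finally resolves the tight root pair near -0.496 and -0.504.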
But if we recognize that we are attempting to solve the 1-dimensional projection of a 2-dimensional problem, then we can avoid this convergence problem.
Specifically, if we are given 4*c^3-3*c=c3, then we first compute s3=sqrt(1-c3^2) (real, since three real zeros force |c3| <= 1), and instead solve the problem
z^3 = z3 = c3 + i*s3.

To see that z = c + i*s (with c^2 + s^2 = 1) recovers our cubic:

  (c + i*s)^3 = c^3 - 3*c*s^2 - i*(s^3 - 3*s*c^2)
              = c^3 - 3*c*(1-c^2) - i*(s^3 - 3*s*(1-s^2))
              = (4*c^3 - 3*c) - i*(4*s^3 - 3*s)
              = c3 - i*(-s3)
              = c3 + i*s3
But Newton's method on the complex problem z^3=z3 is much more robust, so long as we stay away from z=0:
z' = (z3+2*z^3)/(3*z^2)
(Throw away s=imagpart(z) & keep c=realpart(z) for the answer. But before doing so, we can trivially produce the other two roots with complex multiplications: w*z and w^2*z, where w = e^(2*pi*i/3) is a primitive cube root of unity, w^3=1.)
But we know that |z|^2 must be 1 (since |z3|^2 = c3^2 + s3^2 = 1), so we can easily avoid the region near z=0.
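Assembled into Python (the starting point z = 1 is my own choice; any start on the unit circle and away from a basin boundary should behave):

    import cmath, math

    def trig_cubic_roots(c3):
        """Solve 4*c^3 - 3*c = c3 (|c3| <= 1) by lifting to z^3 = z3 = c3 + i*s3
        on the unit circle and running complex Newton."""
        s3 = math.sqrt(1.0 - c3*c3)
        z3 = complex(c3, s3)
        z = 1.0 + 0.0j                       # |z| = 1, safely away from z = 0
        for _ in range(60):
            zn = (z3 + 2*z**3) / (3*z**2)
            done = abs(zn - z) < 1e-15
            z = zn
            if done:
                break
        w = cmath.exp(2j * cmath.pi / 3)     # primitive cube root of unity
        # Keep c = realpart(z); the other two roots come from w*z and w^2*z.
        return [z.real, (w*z).real, (w*w*z).real]

    # The case that made the real iteration crawl is no trouble here:
    print(trig_cubic_roots(0.9999))          # -> ~[0.99999, -0.50407, -0.49592]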
On modern hardware with multiple floating point units, complex arithmetic may have no more latency than real arithmetic, so this method may be no slower, and (due to faster convergence) may actually be faster.