* Allan Wechsler <acwacw@gmail.com> [Sep 16. 2010 08:09]:
Henry's post got me thinking in a different direction, maybe not the one he had intended. I was thinking about Newton's method and the problem that it is vulnerable to instabilities;
due to the fractal nature of the basins of attraction once there are at least three roots. Some people refer to Newton's method as "root polishing" rather than "root finding". (Yes, there are methods to mitigate the problem, such as a damping factor multiplied into the step size, but I assume one can always cook up an example that defeats any particular root finder.)
even when it works perfectly and converges fast, it gets preoccupied with one root and ignores the others unless you start it over.
So I started wondering whether anybody had played with a sort of adaptive version of Newton's algorithm. While one "thread" is converging on a root, a "kibitzer" thread divides the original function by x-r, as if r were the actual root, and gets a modified function that can be the host for another round of Newton's algorithm that would be scared away from the first root and be more likely to converge on the others. Perhaps I'm barking up a squirrel-less tree, but I was imagining different threads converging on different roots, and dividing out by each other's approximations in order to "isolate" their own target root.
Has anyone heard of an approach like this?
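The scheme described above can be sketched in a few lines of Python. The sketch uses implicit deflation, a standard trick: apply Newton to f(x)/prod_i (x - r_i) via the logarithmic derivative rather than dividing polynomials explicitly. The example polynomial and starting point are illustrative, not from the thread:

```python
# For g(x) = f(x) / prod_i (x - r_i) the logarithmic derivative gives
#     g'(x)/g(x) = f'(x)/f(x) - sum_i 1/(x - r_i),
# so the Newton step for the deflated function needs only f, f', and
# the roots found so far -- no explicit polynomial division.

def newton_deflated(f, df, found, x, steps=100, tol=1e-12):
    """Newton's method on f deflated by the roots in `found`."""
    for _ in range(steps):
        fx = f(x)
        if abs(fx) < tol:          # close enough to a root of f
            break
        denom = df(x) / fx - sum(1.0 / (x - r) for r in found)
        x -= 1.0 / denom           # Newton step x - g(x)/g'(x)
    return x

# Example: the three cube roots of unity, all from the same starting point;
# each run is "scared away" from the roots already found.
f = lambda x: x**3 - 1
df = lambda x: 3 * x**2
found = []
for _ in range(3):
    found.append(newton_deflated(f, df, found, 0.5 + 0.5j))
```

After the third run, `found` holds approximations to all three roots, even though every run started from the same point.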
This may not be what you have in mind, but there are methods to find _all_ roots simultaneously; see e.g. http://en.wikipedia.org/wiki/Durand-Kerner_method and the references at the bottom of the page.
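For concreteness, here is a minimal sketch of the Durand-Kerner iteration: all n approximations are updated at once, each one repelled from the others by the product in the denominator. The example polynomial is illustrative; the powers-of-(0.4 + 0.9i) initialization is a common choice:

```python
from functools import reduce

def durand_kerner(coeffs, steps=500, tol=1e-12):
    """All roots of the monic polynomial
    p(x) = x^n + coeffs[0] x^(n-1) + ... + coeffs[-1]."""
    n = len(coeffs)

    def p(x):
        acc = 1.0            # leading coefficient 1 (monic)
        for c in coeffs:     # Horner evaluation
            acc = acc * x + c
        return acc

    # Common initialization: powers of a number that is neither real nor of
    # modulus 1, so no symmetry of the polynomial is accidentally preserved.
    z = [(0.4 + 0.9j) ** k for k in range(n)]
    for _ in range(steps):
        new = [zi - p(zi) / reduce(lambda a, b: a * b,
                                   (zi - zj for j, zj in enumerate(z) if j != i),
                                   1.0)
               for i, zi in enumerate(z)]
        done = max(abs(a - b) for a, b in zip(new, z)) < tol
        z = new
        if done:
            break
    return z

# Example: all three roots of x^3 - 1 at once.
roots = durand_kerner([0, 0, -1])
```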
On Wed, Sep 15, 2010 at 6:14 PM, <rcs@xmission.com> wrote:
This suggests a meta-problem: If you find your numerical algorithm is converging slowly, can you introduce an extra dimension, perhaps imaginary, to speed things up?
Rich
Ignoring the "extra dimension": there is a method (essentially
delta-squared) to obtain a superlinear iteration from a linear one; see
page 165 (Thm. 4.3.3) of Alston S. Householder, "The Numerical Treatment
of a Single Nonlinear Equation", McGraw-Hill, 1970.
In simple terms, if \Phi(x) is an (only linearly convergent) iteration
(i.e. you iterate x_{n+1} = \Phi(x_{n})), then \Phi^* defined by

                              [\Phi(\Phi(x)) - \Phi(x)]^2
  \Phi^*(x) = \Phi(\Phi(x)) - -----------------------------
                              \Phi(\Phi(x)) - 2 \Phi(x) + x

will have superlinear convergence.
(Properly typeset as relation (30.6-7) on page 598 of the fxtbook,
followed by a simple example.)
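In code, the delta-squared construction is a one-liner wrapped around the given iteration (this is the Aitken/Steffensen acceleration; the cos example below is illustrative):

```python
from math import cos

def accelerate(phi):
    """Turn a linearly convergent fixed-point iteration phi into the
    superlinear Phi* of the delta-squared formula."""
    def phi_star(x):
        y = phi(x)            # Phi(x)
        z = phi(y)            # Phi(Phi(x))
        denom = z - 2 * y + x
        if denom == 0:        # x is (numerically) already a fixed point
            return z
        return z - (z - y) ** 2 / denom
    return phi_star

# Example: x = cos(x) converges only linearly under plain iteration;
# the accelerated map reaches the fixed point ~0.739085 in a few steps.
phi_star = accelerate(cos)
x = 1.0
for _ in range(5):
    x = phi_star(x)
```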
[...]
cheers, jj