[math-fun] Towards Newton's method a la mode.
At 01:03 PM 4/19/2011, Marc LeBrun wrote:
Sorry if you've heard this before, but recent threads here inescapably rekindled this rant.
It is exciting and empowering to learn to compute famous functions for yourself.
Stranded on a desert island, you'd arguably find it more useful to know how your lost calculator could have computed square roots than how to fondle the UI of the latest iPhone app.
As denizens of the burgeoning technosphere, bereft of our cultural computing heritage, we are left much like dogs sniffing at an ATM.
Newton's method provides a wonderful excuse to convey invigorating insights.
Alas, Sir Isaac's method is usually presented almost as a kind of side-show hat trick: perhaps rigorously dissected, clearly clever, but missing some vital spark of motivation and burdened with distracting allusions irrelevant to broader contemporary application and appreciation.
I suggest that root-finding is a historical red herring that ought to be suppressed from the main narrative in modern presentations. Of course zeros are inextricably entwined with structural topics like the Fundamental Theorem of Algebra, but they are confusing and non-essential in the context of Newton's method, which is, at heart, about computational procedures.
We might instead present Newton's method motivated top-down like this:
"Given x, we wish to compute the value of some function f(x). However we may not have a way to conveniently compute f(x) directly. Therefore we try to construct another function g(y), composed from pieces we can more readily compute, with the property that iterates of g(y) will converge on a fixpoint equal to f(x)."
Note zeros and root-finding don't arise; the focus is on computation, composition (both in the construction of g and in its iteration) and convergence on fixpoints (which connects it to au courant dynamics etc).
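For concreteness, here's a minimal sketch of that framing in Python (my toy example; the helper name fixpoint_iterate, the tolerance, and the choice f(x) = sqrt(x) are all just illustrative): to compute sqrt(x) we iterate g(y) = (y + x/y)/2, which is composed only of addition and division and has sqrt(x) as an attracting fixpoint.

  # Sketch: compute f(x) = sqrt(x) purely by iterating a fixpoint map g
  # built from pieces we can readily compute (addition and division).
  # The fixpoint y = g(y) of g(y) = (y + x/y)/2 satisfies y*y = x.

  def fixpoint_iterate(g, y0, tol=1e-15, max_iter=100):
      """Iterate y <- g(y) until successive iterates agree to within tol."""
      y = y0
      for _ in range(max_iter):
          y_next = g(y)
          if abs(y_next - y) <= tol * abs(y_next):
              return y_next
          y = y_next
      return y

  x = 2.0
  g = lambda y: (y + x / y) / 2        # fixpoint at sqrt(x)
  print(fixpoint_iterate(g, 1.0))      # -> 1.41421356...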
Interesting developments proceed from there: We can derive g's familiar "classical Newton transform" Ng explicitly by stipulating that g() is "sufficiently well-approximated" by the first two terms of its Taylor series and solving the expansion of the fixpoint constraint y = g(y).
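For concreteness, a minimal sketch of that derivation in my own notation (expanding about the current iterate y0): we seek y with y = g(y), and suppose g(y) is well enough approximated near y0 by g(y0) + g'(y0)(y - y0). Substituting and solving for y gives

  y = (g(y0) - y0 g'(y0)) / (1 - g'(y0)),

so the transform is (Ng)(y) = (g(y) - y g'(y)) / (1 - g'(y)). Writing g(y) = y - f(y) collapses this to the textbook y - f(y)/f'(y), though no zero of anything need be mentioned.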
Further observations and explorations include the following:
* As N can be easily inverted ('cause z/dz = 1/d ln z), EVERY expression when iterated is the "Newton's method" for SOME function.
* The reason iterating some expressions so magically converges whilst others suck goes back to that assumption of being "sufficiently well-approximated" (see the numerical sketch below).
* We can compare iterations of g, gg, ggg..., iterations of the transformed function Ng, NgNg, NgNgNg..., and iterations of the transformation Ng, NNg, NNNg, etc., and seek limiting fixpoints for all these, along with interesting special identities (NgNg = NNg) etc.
* We can consider higher-order expansions: solving the quadratic Taylor approximation requires square roots, so it's useless for SQRT, but could be interesting for CBRT etc., or for transcendentals (e.g. find log given exp).
* We can consider solving other expansions, say Lambert or Fourier series, continued fractions, etc.
All with nary a zero on the horizon...
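Here's a small numerical sketch of those comparisons in Python (my toy code; the helper name newton_transform and the crude central-difference derivative are just illustrative): iterating g(y) = x/y directly never converges at all, it simply flips between y0 and x/y0, while iterating its Newton transform Ng homes in on the fixpoint sqrt(x) in a handful of steps.

  # Sketch: the Newton transform in the fixpoint formulation,
  # Ng(y) = (g(y) - y*g'(y)) / (1 - g'(y)), with g'(y) estimated by a
  # central difference.  Iterating g(y) = x/y directly just oscillates;
  # iterating Ng converges rapidly to the fixpoint sqrt(x).

  def newton_transform(g, h=1e-6):
      def Ng(y):
          gp = (g(y + h) - g(y - h)) / (2 * h)   # numerical estimate of g'(y)
          return (g(y) - y * gp) / (1 - gp)
      return Ng

  x = 2.0
  g = lambda y: x / y                  # fixpoint at sqrt(x), but see below

  y = 1.0
  for _ in range(6):
      y = g(y)                         # 2.0, 1.0, 2.0, 1.0, ... (never settles)
  print("iterating g :", y)

  Ng = newton_transform(g)
  y = 1.0
  for _ in range(6):
      y = Ng(y)                        # 1.333..., 1.4117..., 1.41421..., ...
  print("iterating Ng:", y)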
="Henry Baker" <hbaker1@pipeline.com>
> You are correct about the Taylor series approximation.
> The cool thing is that it works with complex numbers, getting the direction right (or wrong, as the case may be!).
Yes. Again, what is often overlooked is that the classical Newton transform being an invertible map from functions to functions means that iterating pretty much ANY function, no matter how wretchedly it converges, is the "Newton's method" way to compute SOME other function. Moreover, *in general* the Newton transform might just as easily make convergence WORSE as improve it. It isn't any sort of magic bullet; it just happens to be good for functions that are "sufficiently well-approximated" by the first few terms of their Taylor series. Not every function meets this criterion, nor does it exclude better approximations being produced by truncating somebody else's flavor of expansion.
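To make the inversion concrete, here's a sketch in Python/sympy (my code; the "divide-and-average" iteration h(y) = (y + a/y)/2 is just a familiar test case): if h(y) = y - f(y)/f'(y), then f/f' = y - h, i.e. (ln f)' = 1/(y - h), so f is recovered, up to a constant factor, as exp of an integral.

  # Sketch: inverting the classical Newton transform symbolically.
  # Given an iteration h, the f for which h is "Newton's method" is
  # f(y) = C * exp( integral of dy / (y - h(y)) ).

  import sympy as sp

  y, a = sp.symbols('y a', positive=True)
  h = (y + a / y) / 2                        # the iteration we start from
  f = sp.exp(sp.integrate(1 / (y - h), y))   # undo the Newton transform
  print(sp.simplify(f))                      # -> y**2 - a (up to a constant)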
> However, since it is easy to overshoot, you need to be very close to the thing you're looking for.
Note that this statement is true of ALL functional iterations and applies no more to Newton's method than to any other. You want to cozy into the "basin of attraction" of any fixed point you seek.
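A minimal numerical illustration (my toy example: Ng(y) = 2xy/(x + y^2), the Newton transform of g(y) = x/y): the very same iteration finds +sqrt(x) or -sqrt(x) depending solely on which basin the starting guess sits in.

  # Sketch: basins of attraction in the simplest possible setting.
  # Ng(y) = 2*x*y / (x + y**2) has two attracting fixpoints, +sqrt(x) and
  # -sqrt(x); which one the iteration finds depends on where it starts.

  x = 2.0
  Ng = lambda y: 2 * x * y / (x + y * y)

  for y0 in (1.0, -1.0):
      y = y0
      for _ in range(8):
          y = Ng(y)
      print(y0, "->", y)                # 1.0 -> 1.41421..., -1.0 -> -1.41421...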
> Having had lots of experience with Newton in the neighborhood of multiple roots: it sucks! You need to divide out a multiple root, either explicitly or implicitly.
Tsk tsk, such language. My theme is to advocate expressing things in terms of fixed points and get away from all this talk of roots and zeros--try it!
> Many people (including me) have tried making an analogy with some sort of gravitational force field. I've never been completely successful with this analogy. Perhaps someone else has?
Sure, this can be done. To make a "flow" field (as I think RCS has called them) you "just" have to be able to express the n-th iterate f^[n] for fractional n, and an "infinitesimal generator" analogous to the derivative operator. (Strangely, once one can express non-integer iterates, it is often possible to plug virtually anything into n, for instance a complex number, but the "meaning" of this construction defies my easy interpretation).
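A toy illustration of that last remark (my example; the honest construction for a general f is much subtler): for the linear map f(x) = a*x the n-th iterate is simply f^[n](x) = a^n * x, and nothing stops us from plugging in n = 1/2, or even n = i, whatever that "means".

  # Sketch: fractional (and complex) iterates of the linear map f(x) = a*x,
  # where f^[n](x) = a**n * x makes formal sense for any n whatsoever.

  def iterate_linear(a, n, x):
      """n-th iterate of x -> a*x, for arbitrary (even complex) n."""
      return (a ** n) * x

  a, x = 3.0, 1.0
  half = iterate_linear(a, 0.5, x)       # "half an application" of f
  print(iterate_linear(a, 0.5, half))    # ~ 3.0, i.e. one whole application
  print(iterate_linear(a, 1j, x))        # a complex "number of applications"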
> Once you have the concept of a Newton iteration, you can extend it to all sorts of other things: power series, matrices, etc.
Right. Moreover, once you have the concept of deriving computational processes that preserve fixpoints, you can go far beyond just extending the domain and co-domain. Standard Newton's method is a very special instance of a broad class of computations, and may be generalized along many facets. What's usually taught as "Newton's method" might be more precisely called something like the "singly-iterated first-order Newton-Taylor fixpoint-preserving transform". So enough with the rooting for zeros already. That's just needlessly application-specific language. Fine in itself, but beside the core ideas.
participants (2): Henry Baker, Marc LeBrun