Sorry if you've heard this before, but recent threads here inescapably rekindled this rant.

It is exciting and empowering to learn to compute famous functions for yourself. Stranded on a desert island, it's arguably more useful to know how your lost calculator could have computed square roots than to know how to fondle the UI of the latest iPhone app. As denizens of the burgeoning technosphere, being bereft of our cultural computing heritage renders us much like dogs sniffing at an ATM.

Newton's method provides a wonderful excuse to convey invigorating insights. Alas, Sir Isaac's method is usually presented almost as a kind of side-show hat trick: perhaps rigorously dissected, clearly clever, but missing some vital spark of motivation and burdened with distracting allusions irrelevant to enabling a broader contemporary application and appreciation.

I suggest that root-finding is a historical red herring that ought to be suppressed from the main narrative in modern presentations. Of course zeros are inextricably entwined with structural topics like the Fundamental Theorem of Algebra, but they are confusing and non-essential in the context of Newton's method, which is, at heart, about computational procedures.

We might instead present Newton's method motivated top-down like this: "Given x, we wish to compute the value of some function f(x). However, we may not have a way to conveniently compute f(x) directly. Therefore we try to construct another function g(y), composed from pieces we can more readily compute, with the property that iterates of g(y) will converge on a fixpoint equal to f(x)." Note that zeros and root-finding don't arise; the focus is on computation, composition (both in the construction of g and in its iteration), and convergence on fixpoints (which connects it to au courant dynamics, etc.).

Interesting developments proceed from there. We can derive g's familiar "classical Newton transform" Ng explicitly by stipulating that g() is "sufficiently well-approximated" by the first two terms of its Taylor series and solving the expansion of the fixpoint constraint y = g(y) (a sketch of that derivation, and a toy example of the whole framing, follow the list below). Further observations and explorations include the following:

* As N can be easily inverted ('cause z/dz = 1/d ln z), EVERY expression, when iterated, is the "Newton's method" for SOME function.
* The reason iterating some expressions so magically converges whilst others suck goes back to that assumption of "sufficiently well-approximated".
* We can compare iterations of g (g, gg, ggg, ...), iterations of the transformed function Ng (Ng, NgNg, NgNgNg, ...), and iterations of the transformation itself (Ng, NNg, NNNg, ...), seeking limiting fixpoints for all of these, along with interesting special identities (NgNg = NNg), etc.
* We can consider higher-order expansions: solving the quadratic Taylor approximation requires square roots, so it's useless for SQRT, but it could be interesting for CBRT etc., or for transcendentals (e.g. find log given exp).
* We can consider solving other expansions, say Lambert or Fourier series, continued fractions, etc.

All with nary a zero on the horizon...
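
To spell out the derivation alluded to above, under one natural reading of "sufficiently well-approximated" (the y_n notation for the current guess is mine; take this as a sketch, not the only way to set it up):

```latex
% Keep only the first two Taylor terms of g about the current guess y_n:
g(y) \;\approx\; g(y_n) + g'(y_n)\,(y - y_n).

% Impose the fixpoint constraint y = g(y) on that approximation and solve for y,
% taking the solution as the next guess:
y_{n+1} \;=\; \frac{g(y_n) - y_n\,g'(y_n)}{1 - g'(y_n)}
        \;=\; y_n + \frac{g(y_n) - y_n}{1 - g'(y_n)}
        \;=\; (Ng)(y_n).

% Choosing g(y) = y - f(y) (whose fixpoints are f's zeros) recovers the textbook
% form Nf(y) = y - f(y)/f'(y), and that form is what makes the inversion trick
% work: given ANY iteration map m(y), demanding y - f(y)/f'(y) = m(y) is the same
% as (\ln f)'(y) = 1/(y - m(y)), i.e.
f(y) \;=\; \exp\!\int \frac{dy}{\,y - m(y)\,},
% so iterating m is "Newton's method" for this f (up to an irrelevant constant factor).
```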
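And here is a toy example of the whole top-down pitch as a minimal code sketch (Python purely for concreteness; the helper names and the particular choice of g are mine, and the transform is implemented exactly as the linearized-fixpoint solve above):

```python
# A minimal sketch of the fixpoint framing. Goal: compute f(x) = sqrt(x)
# without ever phrasing it as "find a zero of y^2 - x".

def iterate(g, y, n):
    """Apply the map g to y repeatedly, n times."""
    for _ in range(n):
        y = g(y)
    return y

def newton_transform(g, dg):
    """Ng(y) = y + (g(y) - y) / (1 - g'(y)): solve the linearized fixpoint
    constraint y = g(y) exactly, using a hand-supplied derivative dg."""
    return lambda y: y + (g(y) - y) / (1.0 - dg(y))

x = 2.0

# g is composed only of pieces we can readily compute (one division), and its
# fixpoint is sqrt(x) -- but its naive iterates merely bounce on a 2-cycle,
# illustrating the "sufficiently well-approximated" caveat.
g  = lambda y: x / y
dg = lambda y: -x / y ** 2

Ng = newton_transform(g, dg)   # simplifies by hand to y -> 2*x*y / (x + y*y)

print(iterate(g,  1.0, 7))     # 2.0 -- still stuck on the cycle {1.0, 2.0}
print(iterate(Ng, 1.0, 7))     # 1.41421356... -- sqrt(2) to machine precision
```

(Feeding in g(y) = y + x - y*y instead reproduces the familiar Babylonian map (y + x/y)/2, again with no zero in sight.)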