[math-fun] Newton meets Cayley-Hamilton
The following has probably been proposed by many undergraduates learning matrix algebra for the first time, but I've never seen it mentioned before, and I couldn't find it with either Google or Google Scholar.

Newton's famous quadratically convergent method for finding roots of f(x) (so long as the root isn't a multiple root):

  x_(n+1) = x_n - f(x_n)/f'(x_n)

Cayley-Hamilton: a square matrix M satisfies its own characteristic polynomial p(x):

  p(M) = 0

Suppose now that we have a *diagonal matrix* E_n of *eigenvalue approximations* for M, where these approximations are 'sufficiently close' to the (distinct) eigenvalues of M. Then we can improve E_n with:

  E_(n+1) = E_n - p(E_n) (1/p'(E_n))

where (1/p'(E_n)) is the inverse of the matrix produced by substituting E_n into the derivative of the characteristic polynomial p(x). This works because Cayley-Hamilton doesn't care very much *which* matrix with this characteristic polynomial you use: since E_n is diagonal, p(E_n) and p'(E_n) are themselves diagonal, so the update amounts to running scalar Newton's iteration on p at every diagonal entry simultaneously.

-----

I'm not claiming any great efficiency for this observation, but there may be some applications.

-----

I'd love to find an analogous method which 'diagonalizes' M -> E_n, even if it is 'obviously' too expensive. (Expensive algorithms have a way of becoming cheaper due to Moore's Law, increasing core counts, and cleverer implementations.)
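A minimal sketch of the iteration in Python/NumPy, for concreteness. The characteristic polynomial coefficients are obtained here via the Faddeev-LeVerrier recurrence (so no eigenvalues are computed along the way); the test matrix M, the starting guesses, and the helper names are illustrative choices of mine, not anything from the post.

import numpy as np

def charpoly_coeffs(M):
    """Coefficients of det(xI - M), highest degree first,
    via the Faddeev-LeVerrier recurrence (no eigensolves)."""
    n = M.shape[0]
    I = np.eye(n)
    coeffs = [1.0]
    N = M.astype(float)
    for k in range(1, n + 1):
        ck = -np.trace(N) / k
        coeffs.append(ck)
        N = M @ (N + ck * I)
    return np.array(coeffs)

def newton_step(coeffs, E):
    """One step of E_(n+1) = E_n - p(E_n) (1/p'(E_n)).
    For diagonal E, p(E) and p'(E) are diagonal, so the matrix
    update reduces to scalar Newton on each diagonal entry."""
    d = np.diag(E)
    p = np.polyval(coeffs, d)
    dp = np.polyval(np.polyder(coeffs), d)
    return np.diag(d - p / dp)

# Example: the eigenvalues of [[2,1],[0,5]] are 2 and 5.
M = np.array([[2.0, 1.0],
              [0.0, 5.0]])
c = charpoly_coeffs(M)       # [1, -7, 10], i.e. x^2 - 7x + 10
E = np.diag([1.8, 5.3])      # 'sufficiently close' starting guesses
for _ in range(5):
    E = newton_step(c, E)
print(np.diag(E))            # -> [2. 5.] to machine precision

Consistent with the efficiency caveat above, the Faddeev-LeVerrier step already costs n matrix multiplies, so this is a curiosity rather than a competitive eigensolver.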