If you've wanted to understand the conjugate gradient (CG) method (for solving sparse symmetric positive definite systems Ax = b) and you haven't seen this paper before, you're in for a treat: http://www.cs.cmu.edu/~quake-papers/painless-conjugate-gradient.pdf

---

Question: if you carefully follow the CG algorithm, it iteratively builds up a real polynomial p_i(z) such that p_i(A) "approximates" the matrix inverse, i.e. 1/A ≈ p_i(A). If CG runs to the end (n steps for an n x n matrix), the final polynomial is related to the characteristic polynomial q(z):

  q(A) = 0
  <=>  q(A) - q(0)I = -q(0)I
  <=>  (q(A) - q(0)I)/A = -q(0) (1/A)
  <=>  1/A = (q(0) - q(z)) / (z q(0)), evaluated at z = A.

(Note that q(0) - q(z) has no constant term, so dividing by z leaves a genuine polynomial, and q(0) = ±det(A) != 0 (sign depending on convention) since A is nonsingular.)

What relation do these "partial polynomials" of CG have to the characteristic polynomial of A? Are they related to derivatives of the characteristic polynomial? Are they related to Sturm sequences or Padé approximations of the characteristic polynomial?

Note that the particular CG "partial polynomials" depend on the right-hand-side vector b, except when CG runs to the end, in which case b drops out, leaving only the characteristic polynomial with no dependence on b. Perhaps CG is a good (the best?) way to find the characteristic polynomial of a sparse matrix?
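
To make the setup concrete, here is a minimal sketch (assuming Python with NumPy, and a small dense SPD test matrix rather than a sparse one) that runs plain CG for n steps and then checks numerically that the polynomial (q(0) - q(z))/(z q(0)), built from the characteristic polynomial q, reproduces A^{-1} b just as the fully converged CG iterate does. The matrix, right-hand side, and tolerances are illustrative choices, not anything from the paper.

  import numpy as np

  # Small SPD test problem (illustrative sizes/values).
  rng = np.random.default_rng(0)
  n = 5
  M = rng.standard_normal((n, n))
  A = M @ M.T + n * np.eye(n)      # symmetric positive definite
  b = rng.standard_normal(n)

  # Plain conjugate gradient with x_0 = 0, keeping the iterates.
  # Each x_i equals p_i(A) b for some polynomial p_i of degree i-1.
  x = np.zeros(n)
  r = b.copy()                     # r_0 = b - A x_0 = b
  p = r.copy()
  iterates = [x.copy()]
  for _ in range(n):
      Ap = A @ p
      alpha = (r @ r) / (p @ Ap)
      x = x + alpha * p
      r_new = r - alpha * Ap
      iterates.append(x.copy())
      if np.linalg.norm(r_new) < 1e-12 * np.linalg.norm(b):
          break                    # converged early (exact in <= n steps)
      beta = (r_new @ r_new) / (r @ r)
      p = r_new + beta * p
      r = r_new

  # After n steps (exact arithmetic), x_n = A^{-1} b.
  print("CG residual:", np.linalg.norm(A @ iterates[-1] - b))

  # Characteristic polynomial q(z) = det(zI - A), coefficients highest-degree first.
  q = np.poly(np.linalg.eigvalsh(A))
  q0 = q[-1]                       # q(0) = product of (-eigenvalues), nonzero for nonsingular A
  # (q(0) - q(z))/z drops the constant term, leaving coefficients -q[:-1];
  # dividing by q(0) gives the degree-(n-1) polynomial that equals 1/A at z = A.
  inv_poly = -q[:-1] / q0
  # Evaluate that polynomial at A and apply it to b (Horner's scheme).
  approx = np.zeros(n)
  for c in inv_poly:
      approx = A @ approx + c * b
  print("polynomial-inverse residual:", np.linalg.norm(A @ approx - b))

Both residuals should come out near machine precision on this well-conditioned example, which is the sense in which the fully converged CG polynomial and (q(0) - q(z))/(z q(0)) agree; the intermediate iterates are where the b-dependence shows up.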