[math-fun] convergence speed of continued fractions
WDS: Usually you can find the answer by viewing CFs as infinite products of 2x2 matrices, and considering the eigenvalues of those matrices in the limit.

RWG: Warren, before wading into your generous followup, I have a stupid question: scaling a matrix scales the eigenvalues, but doesn't change the value of the C.F. What gives?

WDS: Scaling a matrix does not change the eigenvalue RATIO (2x2 matrix, so unique ratio), which is what matters. OK, since that was not clear enough, let me explain a bit more. (As far as I know I'm the one who invented this, but quite likely not.)

A CF (continued fraction) can also be viewed as a product of 2x2 matrices, because of Wallis's recurrence relations among the truncated-CF numerators & denominators. Any linear recurrence relation is viewable as a matrix product times an initial-state vector. The final result vector is then used to compute the final numerator & denominator of the (fully computed) CF.

Now. If the CF-matrices happened to all be the SAME, then this matrix product would in fact be merely a matrix POWER, and we could thus express everything (all depth-n truncations of the CF, as a function of n) in closed form, since the matrix nth power is expressible using the nth powers of its two eigenvalues.

If you multiply a high matrix power by a vector, M^n v, then what generically happens is that the result is L^n w + a tiny error, where L is the eigenvalue with maximum absolute value and w is the component of v in that eigenvector's direction (well, if the two eigenvectors happen to be orthogonal). The error is, for large n, relatively exponentially small, with shrink factor proportional to the eigenvalue ratio. Some slightly funnier stuff can happen in nongeneric cases, and with non-orthogonal eigenvectors, though... really one should use Schur vectors, which are orthogonal...

Now for GENERAL CFs, where the matrices are not all the same, often the Nth matrix is a "smooth" function of N, and the eigenvectors and eigenvalues change slowly enough as N-->infinity that it is sufficiently approximately valid to approximate the matrix product via the eigenvalue product. (One can prove this in any particular case by bounding the errors.) Then the convergence error is proportional to the product of the small eigenvalues (relative to the product of the large ones), so the asymptotic eigenvalue RATIO is what governs the convergence speed.

More generally, a long time ago in a paper I never finished, I discovered my own version of a lot of Gosperian matrix-product stuff, and that paper had a general-purpose convergence theorem for matrix products generally, via an asymptotic eigenvalue connection.
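[Editor's sketch, not code from the original post: a minimal plain-Python illustration of the above, with made-up helper names matmul and cf_error_ratios. It builds convergents via Wallis's recurrence as a running 2x2 matrix product and prints ratios of successive convergent errors, first for a constant-matrix CF (the golden ratio) and then for a slowly-varying one (a classical CF for 4/pi).]

  from math import sqrt, pi

  def matmul(A, B):
      # product of two 2x2 matrices
      return [[A[0][0]*B[0][0] + A[0][1]*B[1][0],
               A[0][0]*B[0][1] + A[0][1]*B[1][1]],
              [A[1][0]*B[0][0] + A[1][1]*B[1][0],
               A[1][0]*B[0][1] + A[1][1]*B[1][1]]]

  def cf_error_ratios(a, b, target, depth):
      # Wallis's recurrences  p_n = a_n p_{n-1} + b_n p_{n-2},
      #                       q_n = a_n q_{n-1} + b_n q_{n-2}
      # say the state matrix [[p_n, p_{n-1}], [q_n, q_{n-1}]] picks up
      # one factor [[a_n, 1], [b_n, 0]] per CF term.  Returns the ratios
      # err_{n+1}/err_n of successive errors |p_n/q_n - target|.
      P = [[1, 0], [0, 1]]  # seeds p_{-1}=1, p_{-2}=0, q_{-1}=0, q_{-2}=1
      errs = []
      for n in range(depth):
          P = matmul(P, [[a(n), 1], [b(n), 0]])
          errs.append(abs(P[0][0] / P[1][0] - target))
      return [e2 / e1 for e1, e2 in zip(errs, errs[1:])]

  # Constant-matrix case: phi = 1 + 1/(1 + 1/(1 + ...)).  Every factor is
  # [[1,1],[1,0]], eigenvalues phi and -1/phi, so errors should shrink by
  # |lambda2/lambda1| = 1/phi^2 = 0.381966... per term.
  phi = (1 + sqrt(5)) / 2
  print(cf_error_ratios(lambda n: 1, lambda n: 1, phi, 15)[-3:])

  # Varying-matrix case: 4/pi = 1 + 1^2/(3 + 2^2/(5 + 3^2/(7 + ...))).
  # The Nth factor [[2N+1, 1], [N^2, 0]] has eigenvalue ratio tending to
  # (1-sqrt(2))/(1+sqrt(2)) = -0.171573..., whose magnitude should be the
  # observed per-term error shrink.  (b(0) is forced to 1 so that q_0 = 1.)
  print(cf_error_ratios(lambda n: 2*n + 1, lambda n: max(n*n, 1), 4/pi, 15)[-3:])

The printed ratios should settle near 0.381966 and 0.171573 respectively, the limiting eigenvalue-ratio magnitudes, matching the claim that the asymptotic eigenvalue ratio governs the convergence speed.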