Keith F. Lynch :
I had to raise e to the power of various square matrices.
I decided that the best way to do so, given the small amount of
computer power available in those days, was to divide the matrix by 2^n, add the identity matrix to it, then square it n times.
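For concreteness, a minimal sketch of that first-order scheme might look like the following, assuming NumPy (the function name and the caller-chosen n are illustrative, not from the original post):

import numpy as np

def expm_first_order(A, n):
    """Approximate exp(A) as (I + A/2^n) squared n times."""
    M = np.eye(A.shape[0]) + A / 2.0**n  # first-order Taylor step on the scaled matrix
    for _ in range(n):
        M = M @ M                        # undo the scaling by repeated squaring
    return M

This is just (I + A/2^n)^(2^n), which converges to exp(A) as n grows.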
The standard algorithm (IIRC given on the man page of the "bc" shell command, as an example of a function definition and/or maybe as the definition of bc's exp function) is to divide by 2^n until the norm is less than some h > 0, then use a short Taylor expansion (I don't recall which order, but not order 1 as you suggest, e^x = 1 + x + o(x); a few more terms), and then square n times. Using a higher order in the Taylor expansion gives the same accuracy with a smaller n, i.e., fewer "final squarings" during which the error made in the series gets amplified.

(BTW, the same idea applies to the computation of other transcendental functions like cos etc., with of course an analogous but different formula expressing f(x) in terms of f(x/2), e.g., cos(x) = 2 cos²(x/2) - 1 as opposed to exp(x) = (exp(x/2))². The initial division by 2^n can be seen as a rudimentary "pre-conditioning", in the jargon of numerical analysis.)
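A sketch of that scaling-and-squaring recipe, and of the cos(x) = 2 cos²(x/2) - 1 analogue, might look roughly like this, assuming NumPy; the threshold h and the number of Taylor terms below are illustrative choices, not the values bc uses:

import math
import numpy as np

def expm_scale_and_square(A, h=2.0**-5, terms=8):
    """exp(A): halve A until its norm is below h, sum a short Taylor series, square back up."""
    n = 0
    while np.linalg.norm(A / 2.0**n, ord=np.inf) >= h:
        n += 1
    B = A / 2.0**n
    term = np.eye(A.shape[0])
    result = np.eye(A.shape[0])
    for k in range(1, terms + 1):   # Taylor polynomial I + B + B^2/2! + ... + B^terms/terms!
        term = term @ B / k
        result = result + term
    for _ in range(n):              # (exp(B))^(2^n) = exp(A)
        result = result @ result
    return result

def cos_scale_and_double(x, h=2.0**-5, terms=6):
    """cos(x): same idea, using cos(x) = 2*cos(x/2)**2 - 1 instead of squaring."""
    n = 0
    while abs(x / 2.0**n) >= h:
        n += 1
    y = x / 2.0**n
    c = sum((-1)**k * y**(2*k) / math.factorial(2*k) for k in range(terms))
    for _ in range(n):
        c = 2.0 * c * c - 1.0
    return c

The only difference between the two is the recurrence used to undo the scaling: repeated squaring for exp, the double-angle formula for cos.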
But that raises the question of what n should be.

Keith F. Lynch :
If I recall correctly, I had it try lots of values of n, and chose the value that was, in some sense, closest to those generated by n-1 and n+1.
That amounts to the choice of the h > 0 in the above algorithm. You can determine the required h "analytically" so that the final result is (theoretically) accurate to machine epsilon, depending on the chosen order of the Taylor expansion. [If the first neglected term of the series is of order N, then h ~ eps^(1/N).] You would probably want to choose that value: if you stop the halving earlier (i.e., use a larger h), there will be a systematic error (the remainder of the Taylor series) in addition to the rounding errors, while if you make h smaller, the final term(s) of your Taylor polynomial no longer contribute to the sum at machine precision, yet the larger number of squarings at the end will amplify the rounding error made there. All of that makes for nice exercises in courses on numerical analysis, scientific computing, etc. :-) - Maximilian
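For concreteness, a back-of-the-envelope version of that h ~ eps^(1/N) rule of thumb might look like this (a sketch; the helper name and the use of double-precision eps = 2^-52 are assumptions, not from the thread):

import math

def choose_scaling(x_norm, order_N, eps=2.0**-52):
    """Pick h so the Taylor truncation error sits roughly at machine epsilon,
    then the number of halvings n needed to bring x below that h."""
    h = eps ** (1.0 / order_N)                     # h ~ eps^(1/N), as in the text
    n = max(0, math.ceil(math.log2(x_norm / h)))   # smallest n with x_norm / 2^n < h
    return h, n

# Example: double precision, first neglected term of order 8, ||A|| = 10:
# choose_scaling(10.0, 8)  ->  (h ≈ 0.011, n = 10)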