Re: [math-fun] Interesting matrix problem
The discussion of noisy matrices reminded me of the following: About 40 years ago, I had to raise e to the power of various square matrices. This was part of a simulation of a dynamical system, something sort of like pendulums hanging from other pendulums. I decided that the best way to do so, given the small amount of computer power available in those days, was to divide the matrix by 2^n, add the identity matrix to it, then square it n times.

But that raises the question of what n should be. If I recall correctly, I had it try lots of values of n, and choose the value that was, in some sense, closest to those generated by n-1 and n+1. Was that a sound decision? What, if anything, is known about this algorithm? Is it original with me?

I just tried it for a 1x1 matrix whose value is 1. The correct solution is of course e. Here's single precision:

 0 2.00000000000000000000
 1 2.25000000000000000000
 2 2.44140625000000000000
 3 2.56578445434570312500
 4 2.63792848587036132812
 5 2.67699003219604492188
 6 2.69734191894531250000
 7 2.70773959159851074219
 8 2.71299004554748535156
 9 2.71562004089355468750
10 2.71694374084472656250
11 2.71759772300720214844
   [same value as above and below for 12 through 22]
23 2.71759772300720214844
24 1.00000000000000000000
   [same value as above for 25 and forever]

And double precision:

 0 2.00000000000000000000
 1 2.25000000000000000000
 2 2.44140625000000000000
 3 2.56578451395034790039
 4 2.63792849736659995585
 5 2.67699012937818237035
 6 2.69734495256509987371
 7 2.70773901968801977702
 8 2.71299162425344198013
 9 2.71563200016899486400
10 2.71695572946642727175
11 2.71761848233683567244
12 2.71795008118962977406
13 2.71811593626604652840
14 2.71819887772164303641
15 2.71824035193003421540
16 2.71826108990387993458
17 2.71827145910837941756
18 2.71827664376668298729
19 2.71827923611850286179
20 2.71828053227565646921
21 2.71828118036402610613
22 2.71828150439858440279
23 2.71828166642075297332
24 2.71828174742680062081
25 2.71828178793237640321
26 2.71828180818247311379
   [same value as above and below for 27 through 51]
52 2.71828180818247311379
53 1.00000000000000000000
   [same value as above for 54 and forever]
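For the scalar case x = 1, here is a minimal C sketch of the procedure described above (not the original program; the cutoff of n <= 60 is an arbitrary choice). It should reproduce the double-precision table, including the collapse to 1.0 once 1 + 1/2^n rounds to 1:

#include <stdio.h>

/* For each n, compute (1 + x/2^n)^(2^n) by n repeated squarings. */
int main(void) {
    double x = 1.0;
    for (int n = 0; n <= 60; n++) {
        double y = 1.0 + x / (double)(1ULL << n);   /* 1 + x/2^n    */
        for (int i = 0; i < n; i++)
            y *= y;                                 /* square n times */
        printf("%2d %.20f\n", n, y);
    }
    return 0;
}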
I just tried it with a pocket calculator. Again I get duplicate values:

 0 2
 1 2.25
 2 2.4414063
 3 2.5657845
 4 2.6379285
 5 2.6769901
 6 2.6873448
 7 2.7077389
 8 2.7129912
 9 2.7156311
10 2.7169527
11 2.7176153
12 2.7179375
13 2.7180927
14 2.7181593
15 2.7181148
16 2.7180261
17 2.7180261
18 2.7176706
19 2.7176706
20 2.7162475
21 2.7134009
22 2.7134009
23 2.7134009
24 2.6907395
25 2.6459824
26 2.558681
27 1
28 1

For reference, here is e to 1000 places:

2.7182818284590452353602874713526624977572470936999595749669676277240766303535475945713821785251664274274663919320030599218174135966290435729003342952605956307381323286279434907632338298807531952510190115738341879307021540891499348841675092447614606680822648001684774118537423454424371075390777449920695517027618386062613313845830007520449338265602976067371132007093287091274437470472306969772093101416928368190255151086574637721112523897844250569536967707854499699679468644549059879316368892300987931277361782154249992295763514822082698951936680331825288693984964651058209392398294887933203625094431173012381970684161403970198376793206832823764648042953118023287825098194558153017567173613320698112509961818815930416903515988885193458072738667385894228792284998920868058257492796104841984443634632449684875602336248270419786232090021609902353043699418491463140934317381436405462531520961836908887070167683964243781405927145635490613031072085103837505101157477041718986106873969655212671546889570350354

And here's the compact C program I wrote to generate it:

#include <stdio.h>
/* computes e = sum of 1/m! for m <= 0xABC, by base-10 long division */
int main(){int j,k,l[0xBAD]={0};for(j=0xABC;j;j--){for(k=0;k<0x3E9;k++){l[k+1]+=
5*(l[k]%j)<<1;l[k]/=j;}l[0]++;}for(j=0;j<01751;j++){printf("%d",l[j]);if(!j)
printf("%c",46);}printf("\n");}

I'm reminded of the problem of finding the best diameter of the pinhole in a pinhole camera (assuming you're only concerned with resolution, not with light-gathering power). There the solution is simply the geometric mean of the wavelength of light and the focal length of the camera. I wonder if the solution for finding the best n is analogous. Maybe it's the n that generates the geometric mean of 1 and epsilon, where epsilon is the smallest number which can be added to 1 to get a number distinct from 1. In other words, the square root of epsilon. But with a non-trivial matrix, that will vary among the matrix elements.
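As a quick check of that square-root-of-epsilon guess in the scalar double-precision case (a sketch, with C's DBL_EPSILON playing the role of epsilon):

#include <float.h>
#include <math.h>
#include <stdio.h>

/* The guess: the best n makes x/2^n about sqrt(eps). With x = 1 and
   eps = DBL_EPSILON = 2^-52 this predicts n = 26, which is where the
   double-precision run above reaches its best value. */
int main(void) {
    printf("eps       = %g\n", DBL_EPSILON);                /* 2^-52 */
    printf("sqrt(eps) = %g\n", sqrt(DBL_EPSILON));          /* 2^-26 */
    printf("n         = %.0f\n", -log2(sqrt(DBL_EPSILON))); /* 26    */
    return 0;
}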
Keith F. Lynch wrote:
I had to raise e to the power of various square matrices.
I decided that the best way to do so, given the small amount of
computer power available in those days, was to divide the matrix by 2^n, add the identity matrix to it, then square it n times.
The standard algorithm (IIRC given on the man page of the "bc" shell command, as an example of a function definition, or maybe as the definition of bc's exp function) is to divide by 2^n until the norm is less than some h > 0, then use a short Taylor expansion (I don't recall which order, but not order 1 as you suggest, e^x = 1 + x + o(x); a few more terms), and then square n times. Using a higher order in the Taylor expansion lets you get the same accuracy with a smaller n, i.e., fewer "final squarings" during which the error made in the series is amplified. (BTW, the same idea applies to the computation of other transcendental functions like cos, etc., with of course the analogous but different formula expressing f(x) in terms of f(x/2), e.g., cos(x) = 2 cos²(x/2) - 1 as opposed to exp(x) = (exp(x/2))². The initial division by 2^n can be seen as a rudimentary "pre-conditioning", in the jargon of numerical analysis.)
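In the scalar case, a minimal sketch of this scheme might look like the following (the order-4 truncation and the tolerance h = 1e-4 are arbitrary illustrative choices, not necessarily what bc uses):

#include <math.h>
#include <stdio.h>

/* Halve x until |x| < h, sum the Taylor series through x^4/4!
   (in Horner form), then square the result once per halving. */
double exp_scaled(double x, double h) {
    int n = 0;
    while (fabs(x) >= h) { x *= 0.5; n++; }   /* divide by 2^n  */
    double y = 1.0 + x*(1.0 + x*(0.5 + x*(1.0/6.0 + x*(1.0/24.0))));
    while (n-- > 0) y *= y;                   /* square n times */
    return y;
}

int main(void) {
    printf("%.17f\n", exp_scaled(1.0, 1e-4)); /* e = 2.71828182845904523536... */
    return 0;
}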
Keith F. Lynch wrote:
But that raises the question of what n should be. If I recall correctly, I had it try lots of values of n, and choose the value that was, in some sense, closest to those generated by n-1 and n+1.
That amounts to the choice of the h > 0 in the above algorithm. You can determine the required h "analytically" so that the final result is (theoretically) accurate to the machine epsilon, depending on the chosen order of the Taylor expansion. [If you truncate the series at order N, then h ~ eps^(1/N).] You'd probably want to choose that value: if you stop at a larger h, there will be a systematic error (the remainder of the Taylor series) in addition to the rounding errors, but if you make h smaller, the final term(s) of your Taylor series won't contribute to the sum, while the larger number of squarings at the end will amplify the rounding error made there. All of this makes for nice exercises in courses on numerical analysis, scientific computing, etc. :-)
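For concreteness, here is a small check of that rule of thumb in double precision (a sketch; eps is taken to be DBL_EPSILON, and the range of N is arbitrary):

#include <float.h>
#include <math.h>
#include <stdio.h>

/* Rule of thumb: with a Taylor expansion truncated at order N,
   choose h ~ eps^(1/N). */
int main(void) {
    for (int N = 1; N <= 8; N++)
        printf("N = %d  h ~ %.2e\n", N, pow(DBL_EPSILON, 1.0 / N));
    return 0;
}

Note that N = 2 already gives h ~ 1.5e-8, the sqrt(eps) of Keith's guess, and h grows rapidly with N.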
- Maximilian

participants (2):
- Keith F. Lynch
- M F Hasler