Yet another trivial messing around with the Mandelbrot set accompanied by a long-winded (yet on-topic) intro.
At least there are only two formulae this time, rather than thirty-two.

As you are no doubt all aware, one of the defining characteristics of chaotic systems is a property known as "sensitive dependence on initial conditions": two different starting states for a given system, however close they may be initially, will eventually diverge and produce entirely different histories for the system's evolution. Equivalently, any inaccuracy in measuring or specifying the initial state will eventually be magnified to the point that the inaccuracy dominates and the system could be in virtually any possible state. Usually the rate of divergence is exponential - i.e. the difference between the two states is multiplied by some factor every iteration or unit of time - so measuring that rate provides one way of helping identify chaotic systems. This is the origin of the Lyapunov exponent, developed by Aleksandr Lyapunov (or Liapunov, or Ljapunov, or whatever other spellings a Cyrillic->English transliteration can produce) in the late nineteenth century.

To avoid describing Jacobian matrices and characteristic values, I'll just give an explicit definition in the case of a one-dimensional system, where the state of the system at any instant is described by a single number:

                   n
  L =  lim  (1/n) SUM log|df(x_i)/dx|
      n->inf      i=1

where f is the dynamical rule in question; x_0 is the initial state; and x_1, x_2, ... are subsequent states. L is the Lyapunov exponent. Roughly, it says we measure by how much each step magnifies (or shrinks) the divergence at the previous step, and average these measurements over larger and larger numbers of iterations, continuing until the average settles down to a steady value. If L is less than zero, then the divergence becomes a _con_vergence and the system has a periodic cycle.
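If you'd like to see that definition in action, here's a small Python sketch (mine, not part of the formulae below) that estimates L for the standard logistic map f(x) = r*x*(1-x) - a convenient one-dimensional example, since its derivative is simply r*(1-2x):

```python
from math import log

def lyapunov_logistic(r, x0=0.1, warmup=1000, n=100000):
    """Estimate the Lyapunov exponent of f(x) = r*x*(1-x)
    by averaging log|f'(x_i)| = log|r*(1-2*x_i)| along an orbit."""
    x = x0
    # Discard a transient so the orbit settles onto its attractor.
    for _ in range(warmup):
        x = r * x * (1 - x)
    total = 0.0
    for _ in range(n):
        total += log(abs(r * (1 - 2 * x)))
        x = r * x * (1 - x)
    return total / n
```

At r = 4 the map is fully chaotic and the estimate comes out close to the exact value ln 2; at r = 3.2 the orbit is a stable 2-cycle and L is negative, signalling the periodic case described above.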
If it's equal to zero, the system is described as Hamiltonian; an example would be the system x -> x+1 - everything just cruises along. If L is _greater_ than zero - for many, the most interesting situation - then one has a chaotic system. Higher-dimensional systems have several Lyapunov exponents, and by determining them and seeing which are larger and which smaller, which positive and which negative, one can learn quite a few things about the system's sensitivity to initial conditions.

Interesting side note: when f is one-dimensional and depends on a parameter a, and there is a particular value of the parameter - call it a_inf - at which the behaviour of f becomes chaotic (i.e. where L becomes positive), it can be shown that

  L approx. |a - a_inf|^(log 2 / log delta)

where delta = 4.6692016... is Feigenbaum's constant.

Now that I've gone into all that detail about what the Lyapunov exponent is, you can safely ignore it, because most of it is purely background justification for my formulae. Someone else might like to implement the Lyapunov-calculation code in a Fractint formula, so that, e.g., one could have the pixel value specify the parameter to a function, and have Fractint colour that pixel according to the value of L; but all I do here is the perturbation thing.

For each pixel, I start the usual quadratic map iteration (z -> z^2 + c) both for the pixel value itself and for a perturbed version, the perturbation being specified in polar coordinates by p2. The bailout condition is simply whenever the two orbits diverge by a sufficient minimum amount, specified in p3 (which defaults to 4 if the user doesn't say otherwise). And as I found with my initial random explorations, they can be _very_ sensitive to different perturbations.

Gee, all I really needed to write was that last paragraph; but that wouldn't have been anywhere _near_ as informative.

Morgan L.
Owens

"As an aside, it's tempting to speculate that the generic quadratic behaviour of smooth functions ensured by Morse's Theorem accounts for the fact that almost all the basic laws of physics (conservation of energy, Fermat's principle, and the principle of least action, for example) are expressed as quadratic forms. Since it's both stable and 'typical' to be locally quadratic, it's reasonable to expect that the local laws of classical physics will also be expressible as quadratics, as indeed they are."

Lyamand_1        { ; Version 2001 Patchlevel 8
  reset=2001 type=formula formulafile=lyamand.frm
  formulaname=LyapunovMandel passes=1
  center-mag=-0.162602/0.700584/5.464481 params=0/0/1/0.1/0/0
  float=y maxiter=255 inside=255 logmap=5 periodicity=0
  colors=000zzz<42>zzBzz9zz8<3>yy4yy4yx4<72>yX0yX0yW0<127>000
  }

LyapunovJulia_1  { ; Version 2001 Patchlevel 8
  reset=2001 type=formula formulafile=lyamand.frm
  formulaname=LyapunovJulia
  center-mag=+0.46966854283928040/-0.00187656380316929/57.97101
  params=-1/0/3.14/0.4/0/0 float=y maxiter=1023 inside=255
  logmap=36 periodicity=0
  colors=000<42>00s00t00v00w00x00z<41>0qz0rz0sz<3>0yz<9>bymfyljyk<3>zye<9>\
  zyGzyEzyB<3>zy0<41>z80z70z50<2>z20z00y00<24>T00S00R00<3>L00<9>jaDleEoiF<\
  3>zyL<9>z_8zY7zV6<3>zK0<11>D50930520000
  }

frm:LyapunovMandel {
  narg=real(p2)
  nmag=imag(p2)
  bailout=4*(real(p3)==0)+real(p3)*(real(p3)!=0)
  z0=c0=pixel
  z1=c1=pixel+nmag*(cos(narg)+(0,1)*sin(narg))
  :
  z0=z0*z0+c0
  z1=z1*z1+c1
  cabs(z0-z1)<bailout
  }

frm:LyapunovJulia {
  narg=real(p2)
  nmag=imag(p2)
  perturb=nmag*(cos(narg)+(0,1)*sin(narg))
  bailout=4*(real(p3)==0)+real(p3)*(real(p3)!=0)
  z0=pixel
  c0=p1
  z1=z0+perturb
  c1=c0+perturb
  :
  z0=z0*z0+c0
  z1=z1*z1+c1
  cabs(z0-z1)<bailout
  }
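For anyone who'd rather read the per-pixel loop outside Fractint's formula syntax, here's a rough Python equivalent of frm:LyapunovMandel (the function name and signature are my own; narg, nmag and the bailout default of 4 match the formula above):

```python
import cmath

def lyapunov_mandel_iters(pixel, narg, nmag, bailout=4.0, maxiter=255):
    """Iterate z -> z^2 + c for the pixel and a perturbed copy,
    returning how many steps pass before the two orbits diverge
    by more than `bailout` (mirrors frm:LyapunovMandel)."""
    # The perturbation is given in polar coordinates, as via p2.
    perturb = nmag * cmath.exp(1j * narg)
    z0 = c0 = pixel
    z1 = c1 = pixel + perturb
    for i in range(maxiter):
        z0 = z0 * z0 + c0
        z1 = z1 * z1 + c1
        if abs(z0 - z1) >= bailout:
            return i + 1   # orbits diverged: pixel coloured by count
    return maxiter         # orbits stayed close: "inside"
```

A pixel whose orbit escapes (say 2+0j) bails out within a couple of iterations even for a small perturbation, while one deep inside the set (0+0j) keeps both orbits bounded and close, so it runs to maxiter.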