Jim, the feisty and doggedly determined debater, a worthy rhetorical adversary indeed, wrote menacingly:
An interesting and entertaining analysis, Tim!
But hang on just one minute!!!!
Oh oh, I'm under attack!!! Jim goes on, dangerously:
We agree that
log(x+iy) = (1/2)log(x^2 + y^2) + i(atan2(y,x) + 2kPi)
(I'll use "atan2(y,x)" because it's less ambiguous)
It's standard practice in mathematical libraries that when a single-valued result is required, the "principal value" is returned, which in this case is to set k=0.
Yup! We agree violently!
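For concreteness, here is that formula with k=0 in a few lines of plain C (my own illustration using the standard math library, not Fractint's Log()):

#include <math.h>
#include <stdio.h>

/* Principal-value complex log, straight from the formula above with k = 0:
   log(x+iy) = (1/2)*log(x^2 + y^2) + i*atan2(y,x)                         */
static void principal_log(double x, double y, double *re, double *im)
{
    *re = 0.5 * log(x * x + y * y);
    *im = atan2(y, x);      /* atan2 returns a value in [-PI, +PI] */
}

int main(void)
{
    double re, im;
    principal_log(-5.0, 0.0, &re, &im);
    printf("log(-5+0i) = %.17g + %.17g*i\n", re, im);
    return 0;
}

For an argument of exactly (-5, 0) this prints an imaginary part of (the double nearest) +PI, which is the principal value Jim describes.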
I'm sorry, I can't agree! atan2(0,-5) is (by ANSI definition), +PI. A value of -PI is of course "correct", but not the "principal value". It corresponds to selecting k = -1 in the log function, a strange choice indeed.
I shall take responsibility for not writing clearly. I should not have written PI when I meant "a number close to PI", since of course a computer cannot represent PI exactly. Fractint's Log() function normalizes its output in exactly the conventional way, and an argument of exactly -5 would produce an imaginary part of (nearly) PI. My argument rests not on a non-standard normalization of the output of log(), but on the claim that (-a*b*g*h) differs by just the tiniest bit from -5. Then the imaginary part of Log() (which would be PI if the argument were exactly -5) might get nudged up a smidgen, so the log() result (after normalization) swings over to a number close to (but not exactly) -PI.
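To make "swings over" concrete: if the accumulated imaginary part comes out even one ulp above PI, a conventional reduction into (-PI, +PI] throws it all the way over to roughly -PI. Here is a toy wrap in plain C (my own normalize routine for illustration, not Fractint's assembler):

#include <math.h>
#include <stdio.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* Reduce an angle into (-PI, +PI], the kind of normalization a Log()
   routine applies to its imaginary part.                              */
static double normalize_angle(double theta)
{
    while (theta >   M_PI) theta -= 2.0 * M_PI;
    while (theta <= -M_PI) theta += 2.0 * M_PI;
    return theta;
}

int main(void)
{
    double near_pi = M_PI;                  /* the double just below PI     */
    double past_pi = nextafter(M_PI, 4.0);  /* one ulp higher, just past PI */

    printf("%.17g -> %.17g\n", near_pi, normalize_angle(near_pi));
    printf("%.17g -> %.17g\n", past_pi, normalize_angle(past_pi));
    /* The second line wraps to about -3.14159..., a swing of nearly
       2*PI caused by an input change of a single ulp.                 */
    return 0;
}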
Jim's fatal mistake was assuming that, given a = 1, b = -5, g = 1, h = -1, the value (-a*b*g*h) is exactly -5. Yes, it's very close to -5.
In 30 years of programming, I've yet to come across a system that does not produce EXACTLY -5 from these values.
I admit I am making an assumption. I cobbled up a quick test program and I couldn't make -5 differ from (-a*b*g*h) either. But keep in mind the following. Fractint in float mode has no integer type. We're talking doubles here, and worse, complex numbers (ordered pairs of doubles). Worse still, numbers are getting shuttled back and forth between 64-bit IEEE doubles and non-standard Intel 80-bit doubles. The parser is doing all kinds of complicated things. It wouldn't shock me one bit if one little ol' bit got changed in (-a*b*g*h). I will try to write out ALL the bits of the value -5 calculated both ways to see how they differ (by tomorrow); a sketch of the sort of test I mean is at the end of this post. You have already proven that -5 is not equal to (-a*b*g*h) in Fractint. I believe I can show you that a bit changed, maybe the least significant bit. I don't think you can call THIS a bug if it's the least significant bit of a double. You only see it because log() is discontinuous, but I get ahead of myself.

OK, for the sake of argument grant me (for now) that in the parser (-a*b*g*h) differs by some tiny amount from -5. I believe that is the ONLY assumption you have to accept to succumb to the argument. Fractint's log() function is written in assembler that tries very hard to keep the entire calculation, with all intermediate values, in the coprocessor at 80 bits. This actually results in HIGHER precision and a MORE accurate result than calculating the complex log in C. Using C, all intermediate values are converted (truncated) to 64 bits at each step. We should therefore expect Fractint's log to get slightly different numerical results from some other log implementation. I ask (dramatically, with a rhetorical flourish): is calculating at HIGHER precision a bug????

(At work we are considering porting many large and complex shuttle program applications from Unix to Linux on Intel. One of the BIG porting issues is that most Unix workstations (Sun, IBM) deal exclusively with 64-bit doubles, but the 80-bit Intel math coprocessor gives different results; the precision behavior changes.)

So my argument in a nutshell:

1. Your bug example feeds the discontinuous complex power function the value (-5.0, 0), which happens to be RIGHT on the boundary of a discontinuity of the power function.

2. (-a*b*g*h) != -5 (you have granted me this for the sake of argument).

3. Ergo, (-5 plus a tiny tiny epsilon)^(a number) differs by a lot from (exactly -5)^(a number) because of the discontinuity.

The fact that one is the conjugate of the other is a numerical accident. There really is no bug in the complex power function in Fractint. Because we are talking about an argument on the edge of a discontinuity, any small change in the calculation makes a big difference. I can "eliminate" the bug by replacing the all-in-assembler-mostly-at-80-bits log with a conventional all-in-C-using-64-bit-doubles log. It is no surprise that these two log implementations differ. The assembler version is arguably more accurate because the calculation is entirely (or at least mostly) done at 80 bits.

I hereby challenge anyone reading this to use Fractint to make a graph of the imaginary part of (z^constant) where the corners are such that (-5,0) is in the center of the screen. Make the color reflect the imaginary part of (z^constant). I don't believe it matters what "constant" is, but it probably needs to have both a nonzero real part and a nonzero imaginary part.
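Here is the sketch I promised above, in plain C with the standard library's cpow (not Fractint's assembler), and with 1.5+0.5i standing in for "constant" since any exponent with nonzero real and imaginary parts should do. Don't expect the bit patterns to differ in this plain-double version; the open question is what the parser's complex arithmetic and 80-bit intermediates do:

#include <complex.h>
#include <math.h>
#include <stdio.h>
#include <string.h>

/* Dump the raw bit pattern of a double, for comparing -5.0 against
   -(a*b*g*h) bit for bit.                                            */
static void dump_bits(const char *label, double d)
{
    unsigned long long bits;
    memcpy(&bits, &d, sizeof bits);
    printf("%-12s % .17g  0x%016llx\n", label, d, bits);
}

int main(void)
{
    double a = 1.0, b = -5.0, g = 1.0, h = -1.0;

    dump_bits("-5.0",       -5.0);
    dump_bits("-(a*b*g*h)", -(a * b * g * h));

    /* Point 3 in a nutshell: sitting exactly on the branch cut of the
       power function, the tiniest nudge below the negative real axis
       makes z^c come out wildly different.                            */
    double complex c  = 1.5 + 0.5 * I;       /* arbitrary exponent     */
    double complex z1 = -5.0 + 0.0 * I;      /* exactly on the cut     */
    double complex z2 = -5.0 - 1e-300 * I;   /* a whisker below it     */

    double complex w1 = cpow(z1, c);
    double complex w2 = cpow(z2, c);

    printf("(-5 + 0i)^c      = % .6g %+.6g*i\n", creal(w1), cimag(w1));
    printf("(-5 - 1e-300i)^c = % .6g %+.6g*i\n", creal(w2), cimag(w2));
    return 0;
}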
Ball's in your court, Tim!
It's in yours now, though I admit I am asking you to grant me that (-a*b*g*h) might differ a tiny bit from -5. I have just a bit more work to do to close my case. I am enjoying our game of "fractal tennis", but I know I am right: you *will* be defeated!!! "Bug in Fractint", he says! Harumph!

Tim