Just one note on rendering using the GPU. All graphics cards I'm aware of provide only single-precision math. This suggests that you wouldn't want to do away with CPU-based rendering altogether - you'd still need it to do double-, extended-, or arbitrary-precision math. (Though there might be some nifty way to apply the GPU to arbitrary-precision arithmetic, I suppose.)

There's also a distinction between using GPU hardware to display the image and using it to calculate the iterations in parallel - you could do either without the other. (For example, use a shader function to render the fractal into an offscreen texture, then display it using plain old X or Win32.) For display purposes, it probably doesn't matter what you do - anything that isn't horrifyingly inefficient would do, since actually calculating the fractal is going to be a whole lot more expensive than blasting the results onto the screen.

----- Original Message -----
From: "Richard" <legalize@xmission.com>
To: "Fractint developer's list" <fractdev@mailman.xmission.com>
Sent: Sunday, July 15, 2007 12:47 PM
Subject: Re: [Fractdev] the future of FractInt is OpenGL
In article <469A1ABC.10284.43339D@twegner.swbell.net>, "Tim Wegner" <twegner@swbell.net> writes:
Just to say it again: Rich does not speak for anyone other than himself
Just to say it again: Tim and Jonathan don't speak for me.
Really, what's the point of such a statement?
When have I ever claimed that I spoke for someone else?
Is there some implied threat in reminding people that I don't speak for you or Jonathan?
You guys are free to do whatever you want with the code, as am I. That's what open source is all about. I don't need anyone's permission or blessings to change the code.
You have been free to modernize and improve the code all along; many of the improvements I've made to make the code easier to understand have nothing to do with being freed from the restraints of the DOS memory model. Using symbolic constants instead of magic numbers. Breaking long functions into smaller functions with intention-revealing names. Extracting duplicate code into functions instead of repeating it inline. Changing variable names to make them more revealing of their intention. Nothing in the DOS world was preventing those improvements from being made, yet they remained undone.
We started this discussion of modernization about ten years ago, and while everyone agreed with the goals then, nothing was done about it. In 1999 I contributed the majority of the source needed to move the code to where we are currently, in terms of functionality. The contribution languished. Since then, not much has happened to the main code except for the occasional bug-fix patch. You can't even *compile* the DOS code without ancient compilers that no one can get anymore.
Tim says I have strong opinions as if that's a bad thing. What kind of spineless person has an opinion without defending it? I'm backing my "strong opinions" up with actual work on modernizing the code and dragging it kicking and screaming into the 1990s and then into the 21st century. I'll also be happy to back my opinions up with data, my nearly 30 years of experience writing software and my 20+ years of experience in computer graphics. My "strong opinions" are conventional wisdom in the rest of the graphics and software engineering world; it's only on this list where people react like I'm saying the earth is flat.
It's evolve or die. The long, slow, languishing death-bed scene has been going on for about 10 years. I'm moving ahead with no apologies.
--
"The Direct3D Graphics Pipeline" -- DirectX 9 draft available for download
<http://www.xmission.com/~legalize/book/download/index.html>
Legalize Adulthood! <http://blogs.xmission.com/legalize/>
_______________________________________________
Fractdev mailing list
Fractdev@mailman.xmission.com
http://mailman.xmission.com/cgi-bin/mailman/listinfo/fractdev