In article <46966AA7.18255.3719B6@twegner.swbell.net>, "Tim Wegner" <twegner@swbell.net> writes:
However, I have some questions. The fundamental function of fractint is to produce 2D pixel arrays.
Actually the code in fractint produces 2D *iteration* arrays. The colors are only obtained by pumping the iteration arrays through a color map.
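To make that concrete, here's a minimal sketch (not FractInt's actual code; the names are made up for illustration) of what "pumping the iteration array through a color map" amounts to:

```c
#include <stdint.h>

/* Toy sketch of the iteration-array -> color-map pump described above.
 * The identifiers here are illustrative, not FractInt's. */
typedef struct { uint8_t r, g, b; } rgb_t;

void apply_colormap(const int *iters, int npixels,
                    const rgb_t map[256], rgb_t *out)
{
    for (int i = 0; i < npixels; i++)
        out[i] = map[iters[i] % 256];  /* wrap iteration counts into the map */
}
```

Swap in a different 256-entry map and the same iteration array produces an entirely different image; the iteration data itself never changes.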
Besides the afore-mentioned 3D function that is really a post process (e.g. it acts on regular 2D images) there are a few other "3D" functions such as the julibrot (greyscale distance rendering) and red/blue 3D orbits. But with these possible exceptions, fundamentally fractint writes to a pixel buffer, and needs access to already-written pixels to do things like solid guessing and boundary tracing.
Solid guessing and boundary tracing are screen decomposition algorithms. Sometimes you want them for the effects they create on the final image -- for instance, it's well known that boundary tracing and solid guessing can introduce artifacts (i.e. renderings that differ from brute-force iteration). Solid guessing can miss islands of escaped orbits, and boundary tracing can miss features too.

Can solid guessing and boundary tracing be recast to operate entirely on the GPU? I'm not sure, but there's nothing that says you can't keep doing them on the CPU and upload the resulting iteration array into the graphics card. Existing OpenGL-based fractal programs have been doing this for a decade or more. However, I'm willing to bet that if you have "brute force" rendering on the GPU running in real-time and boundary tracing running relatively slowly on the CPU, most people are going to explore in GPU-accelerated mode and then switch to boundary tracing to see how it looks if they want the boundary tracing effect.

Similarly, solid guessing was an algorithm created as a speedup because it was slow to brute-force compute regions on the CPU. If you can compute brute-force renderings on the GPU in real-time, would you care about solid guessing? Maybe only when you're zoomed deeper than the GPU-accelerated range.
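For reference, the corner-testing idea behind solid guessing can be sketched in a few lines of C. This is a toy version, not FractInt's actual implementation (the real one works in multiple passes over sub-blocks), but it shows both the speedup and why small "islands" interior to a block get missed:

```c
/* Standard Mandelbrot escape-time count for one point. */
static int iterate(double cr, double ci, int maxiter)
{
    double zr = 0.0, zi = 0.0;
    int n = 0;
    while (n < maxiter && zr * zr + zi * zi <= 4.0) {
        double t = zr * zr - zi * zi + cr;
        zi = 2.0 * zr * zi + ci;
        zr = t;
        n++;
    }
    return n;
}

/* If the four corners of a b-by-b block agree, guess the whole block;
 * otherwise fall back to brute force for every pixel in it. An island
 * of different counts entirely inside the block is silently lost. */
void solid_guess_block(int *grid, int w, int x0, int y0, int b,
                       double re0, double im0, double step, int maxiter)
{
    int c00 = iterate(re0 + x0 * step,           im0 + y0 * step,           maxiter);
    int c10 = iterate(re0 + (x0 + b - 1) * step, im0 + y0 * step,           maxiter);
    int c01 = iterate(re0 + x0 * step,           im0 + (y0 + b - 1) * step, maxiter);
    int c11 = iterate(re0 + (x0 + b - 1) * step, im0 + (y0 + b - 1) * step, maxiter);
    int guess = (c00 == c10 && c00 == c01 && c00 == c11);
    for (int y = y0; y < y0 + b; y++)
        for (int x = x0; x < x0 + b; x++)
            grid[y * w + x] = guess ? c00
                : iterate(re0 + x * step, im0 + y * step, maxiter);
}
```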
If you think just of 2D fractals, what is the fastest/most effective way of writing and reading pixels?
The fastest way will *always* be through the GPU on modern hardware (and by "modern" I mean anything from the last five years or so). Anything you draw through any API is going through the GPU anyway; you're better off working directly with the GPU than going through some other layer. OpenGL is the fastest platform-agnostic way to get at the GPU.
Can opengl write and display 2D fractals efficiently?
Yes, trivially so.
How does it compare to other pixel rendering methods in terms of speed?
Other methods look like slugs.
The other concern is just getting the literal port done, working, and
The port is actually quite trivial because currently the iteration array is kept in the driver and copied to the screen. You would just copy the iteration array to a texture and draw a quadrilateral instead.
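A hedged sketch of that port path (illustrative names, not actual FractInt code): expand the iteration array through the color map into an RGBA buffer and hand that to OpenGL as a texture whenever it changes. Only the CPU-side packing is shown here; the GL calls it feeds are noted in comments and assume a context is already current:

```c
#include <stdint.h>

/* Pack iteration counts into an RGBA8 buffer suitable for a GL texture. */
void iters_to_rgba(const int *iters, int w, int h,
                   const uint8_t map[256][3], uint8_t *rgba)
{
    for (int i = 0; i < w * h; i++) {
        const uint8_t *c = map[iters[i] % 256];
        rgba[4 * i + 0] = c[0];
        rgba[4 * i + 1] = c[1];
        rgba[4 * i + 2] = c[2];
        rgba[4 * i + 3] = 255;   /* opaque */
    }
    /* With a GL context current, upload and draw:
     *   glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h,
     *                   GL_RGBA, GL_UNSIGNED_BYTE, rgba);
     * then draw one screen-aligned textured quad. */
}
```

Note the nice side effect: changing the color map costs one repack and one texture upload, not a recompute of the fractal.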
debugged, and releasing something that would compete for the fractint fanatics' hearts and minds with the DOS version.
As I've said before, if people want DOS, they can always use an old version. Keeping "DOS-ness" is liability, not an asset.
But if opengl has effective pixel reading and writing,
You don't need reading; just writing. For things that need to read the iteration array, you just have a CPU-side copy of that iteration array. This isn't DOS, we have plenty of memory.
rather than later. But I would still suggest postponing the rewrite of 3D for a second stage after the literal port.
It's not really a rewrite, except for the code that draws 3D fractals; iterated dynamic systems are the only type there that draws true 3D.
A final thought is I am not sure the 3D post processing function even deserves to be in the program, but 3D fractals that are directly generated by orbits, only in 3-space like julibrots and 3D orbit fractals really do belong. The reason is that other specialized programs (e.g. POVRay) do the post processing much better than fractint.
Have you looked at what GPUs are doing these days? Real-time raytracing on the GPU is getting pretty common, although you still have to bend the raytracing algorithm somewhat to fit into the GPU architecture. GPUs are growing in performance *faster* than Moore's law, doubling every 9-12 months instead of every 18. You can still output 3D models to POVray/etc. for rendering in your favorite "offline" renderer, but handling the 3D natively in FractInt is actually going to make the existing code *cleaner* and *simpler* than what is there currently, since you can dispense with the whole chunk of code that's contorting itself backwards end-over-end to do software 3D rendering.
--
"The Direct3D Graphics Pipeline" -- DirectX 9 draft available for download
<http://www.xmission.com/~legalize/book/download/index.html>
Legalize Adulthood! <http://blogs.xmission.com/legalize/>