In article <00a501c7c76f$28df5250$1402a8c0@eyx2>, "Edwin Young" <edwin@bathysphere.org> writes:
> All graphics cards I'm aware of only provide single-precision math.
> This suggests that you wouldn't want to do away with CPU-based
> rendering altogether - you'd still need it to do double-, extended-,
> or arbitrary-precision math. (Though there might be some nifty way
> to apply the GPU to calculate an arbitrary-precision number, I
> suppose.)
Agreed.
> There's also a distinction between using GPU hardware to display the
> image, and using it to calculate the iterations in parallel - you
> could do either without using the other.
Agreed.
-- 
"The Direct3D Graphics Pipeline" -- DirectX 9 draft available for download
<http://www.xmission.com/~legalize/book/download/index.html>
Legalize Adulthood! <http://blogs.xmission.com/legalize/>