In article <1910.2573-9011-346628596-1178452362@seznam.cz>, charlie chernohorsky <endlessoblivion@seznam.cz> writes:
> Richard wrote:
> > I've always envisioned that the samples should always be on a rectangular grid aligned with the coordinate axes of the complex plane. Any distortion or rotation should be done by a display process and not by the sampling process. ... Think of it like texture mapping. Your texture is always defined in a rectangular sample grid. Using the texture coordinates you can stretch and rotate it any way you want.
>
> But when you change the order to sampling, then rotation, you don't get the same (pixel-by-pixel) picture, do you?
That's true, you might not get the same pixel-exact picture, but that's because the parallelogram-based sampling introduces the distortion into the sampling itself. Rendering fractals is essentially texture mapping, except that the texels are computed procedurally instead of being painted by an artist in a paint program.

If you drive sampling of a source texture from the screen-space coordinates of a distorted quadrilateral (i.e. a parallelogram, or a quadrilateral rotated with respect to the texture), then you introduce aliasing between screen space and texture space. With textures this is solved by sampling through texture coordinates and filtering with mipmapping and bilinear interpolation. Tilt the quadrilateral in perspective and you need homogeneous texture coordinates in order to undo the perspective distortion.

Fractint currently always samples directly from screen space, probably because of its limited-memory heritage. However, if you sample the fractal in a non-distorted space, then you can create mipmaps and use all the same distortion-eliminating tricks that texture mapping uses.
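To make the two-pass idea concrete, here's a rough sketch in C: sample the fractal into an axis-aligned iteration buffer, then do the rotation at display time by mapping each screen pixel back into texture space with a bilinear lookup. The names, the buffer size, and the plain escape-time loop are all my own illustration, not Fractint code.

#include <math.h>
#include <stdio.h>

#define TEX_W 512
#define TEX_H 512
#define MAX_ITER 256

static float tex[TEX_H][TEX_W];   /* iteration counts on the axis-aligned grid */

/* Escape-time iteration count for c = cr + ci*i. */
static float mandel(double cr, double ci)
{
    double zr = 0.0, zi = 0.0;
    int n;
    for (n = 0; n < MAX_ITER && zr * zr + zi * zi <= 4.0; ++n) {
        double t = zr * zr - zi * zi + cr;
        zi = 2.0 * zr * zi + ci;
        zr = t;
    }
    return (float)n;
}

/* Pass 1: sampling, always on a rectangle aligned with the axes. */
static void sample_texture(double xmin, double xmax, double ymin, double ymax)
{
    int i, j;
    for (j = 0; j < TEX_H; ++j)
        for (i = 0; i < TEX_W; ++i)
            tex[j][i] = mandel(xmin + (xmax - xmin) * i / (TEX_W - 1),
                               ymin + (ymax - ymin) * j / (TEX_H - 1));
}

/* Bilinear lookup into the iteration texture at fractional coordinates. */
static float tex_bilinear(float u, float v)
{
    int   i0 = (int)u, j0 = (int)v;
    int   i1 = i0 + 1 < TEX_W ? i0 + 1 : i0;
    int   j1 = j0 + 1 < TEX_H ? j0 + 1 : j0;
    float fu = u - i0, fv = v - j0;
    float top = tex[j0][i0] * (1 - fu) + tex[j0][i1] * fu;
    float bot = tex[j1][i0] * (1 - fu) + tex[j1][i1] * fu;
    return top * (1 - fv) + bot * fv;
}

/* Pass 2: display.  The screen pixel is rotated back into texture space;
 * the sampling grid itself never rotates.  Screen and texture are assumed
 * to be the same size just to keep the sketch short. */
static float display_pixel(int x, int y, int scr_w, int scr_h, double angle)
{
    double dx = x - scr_w / 2.0, dy = y - scr_h / 2.0;
    double u = cos(angle) * dx - sin(angle) * dy + TEX_W / 2.0;
    double v = sin(angle) * dx + cos(angle) * dy + TEX_H / 2.0;
    if (u < 0.0 || v < 0.0 || u > TEX_W - 1 || v > TEX_H - 1)
        return 0.0f;              /* fell outside the sampled rectangle */
    return tex_bilinear((float)u, (float)v);
}

int main(void)
{
    sample_texture(-2.0, 1.0, -1.5, 1.5);
    printf("%g\n", display_pixel(200, 300, 512, 512, 0.3));
    return 0;
}

Shrinking or zooming out would then just be a matter of building mipmaps of tex and picking a level from the screen-space footprint, exactly as a texture mapper does.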
> Think of it like doing an inversion. If you reorder the steps to sampling, then inversion, you get a *very* sparse image.
Inversion is something completely different. First, for a parallelogram or rotated zoom box the transformation is linear (a parallelogram represents a shear, but a shear is still representable by a matrix), whereas an inversion is a non-linear transformation that preserves neither areas nor angles.

Second, even an inversion is analytical enough that you can define the source region that corresponds to the screen-space region of a pixel. You can then sample that source region (i.e. the complex plane) appropriately and filter it to obtain the screen-space "iteration". (It wouldn't really be a single iteration then, but an average of all the iterations in the source region that corresponds to the screen pixel.) Whether that would be feasible or reasonable to do is an interesting question.

The point, though, is that just because an image is "different" from whatever Fractint renders right now doesn't mean the new image is "wrong". Fractint does all kinds of stuff all over the place that introduces small errors into the resulting image in deference to speed. For instance, the symmetry speedups introduce small errors when pixels don't exactly straddle the axes, and I don't hear anyone complaining about that. Solid guessing also introduces errors in plenty of fractal types. Alternate algorithms generate different images of the same fractal -- the DEM algorithm was created specifically to highlight the dendritic filaments of the M-set that don't show up in typical escape-time algorithms because the filaments are thinner than a pixel. Even the idea that a pixel represents only a single sample in the complex plane introduces aliasing and errors.
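Just to make the area-averaging idea concrete, here's the kind of thing I have in mind, again as a C sketch rather than anything Fractint actually does. The inversion w -> 1/w as the display transform, the 4x4 sub-sample grid, and the box filter (a plain average) are all assumptions on my part.

#include <complex.h>
#include <stdio.h>

#define MAX_ITER   256
#define SUBSAMPLES 4              /* 4x4 sub-samples per screen pixel */

/* Plain escape-time count at a point c in the source complex plane. */
static double mandel(double complex c)
{
    double complex z = 0.0;
    int n;
    for (n = 0; n < MAX_ITER && cabs(z) <= 2.0; ++n)
        z = z * z + c;
    return (double)n;
}

/* One screen pixel whose footprint in display space is the little
 * rectangle from w0 to w0 + dw + dh*i.  Each sub-sample is mapped back
 * through the inversion (z = 1/w) to the source plane, iterated there,
 * and the results are averaged -- so the pixel value is a filtered
 * average over the corresponding source region, not a single sample. */
static double pixel_under_inversion(double complex w0, double dw, double dh)
{
    double sum = 0.0;
    int i, j;
    for (j = 0; j < SUBSAMPLES; ++j) {
        for (i = 0; i < SUBSAMPLES; ++i) {
            double complex w = w0 + (i + 0.5) * (dw / SUBSAMPLES)
                                  + (j + 0.5) * (dh / SUBSAMPLES) * I;
            if (w == 0.0)         /* the inversion blows up at the origin */
                continue;
            sum += mandel(1.0 / w);
        }
    }
    return sum / (SUBSAMPLES * SUBSAMPLES);
}

int main(void)
{
    /* a pixel near the origin, where the corresponding source region is large */
    printf("%g\n", pixel_under_inversion(0.05 + 0.05 * I, 0.01, 0.01));
    return 0;
}

-- 
"The Direct3D Graphics Pipeline" -- DirectX 9 draft available for download
<http://www.xmission.com/~legalize/book/download/index.html>

Legalize Adulthood! <http://blogs.xmission.com/legalize/>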