Working on 'et' (my escape-time #fractal program) this morning.
I added #supersampling presets from 1x1 through 8x8, selectable by keypress. It already supported supersampling, but you had to enter three numbers in a modal dialog.
Calculating the arrays used for progressive image rendering was annoyingly slow, so I now cache them on disk using the xdg-basedir and vector-mmap packages (et is written in #Haskell). I also need the directory and filepath packages for this part.
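For flavour, here's a minimal sketch of how those packages could fit together. The file naming scheme, the "et" app name, and the Word16 element type are my guesses for illustration, not et's actual layout:

```haskell
import Data.Word (Word16)
import qualified Data.Vector.Storable as V
import Data.Vector.Storable.MMap (unsafeMMapVector, writeMMapVector)
import System.Environment.XDG.BaseDir (getUserCacheDir)
import System.Directory (createDirectoryIfMissing, doesFileExist)
import System.FilePath ((</>))

-- hypothetical cache key: base resolution plus supersampling factor
cacheFile :: Int -> Int -> Int -> FilePath
cacheFile w h ss =
  concat [show w, "x", show h, "-", show ss, "x", show ss, ".vec"]

-- load the array from the XDG cache dir, computing and storing it on a miss
cachedOrCompute :: Int -> Int -> Int
                -> IO (V.Vector Word16) -> IO (V.Vector Word16)
cachedOrCompute w h ss compute = do
  dir <- getUserCacheDir "et"            -- ~/.cache/et on most setups
  createDirectoryIfMissing True dir
  let path = dir </> cacheFile w h ss
  hit <- doesFileExist path
  if hit
    then unsafeMMapVector path Nothing   -- mmap the whole file
    else do v <- compute
            writeMMapVector path v       -- persist for the next run
            pure v
```

The mmap route means a warm start doesn't even have to read the whole file up front; pages get faulted in as the renderer touches them.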
The arrays get quite big: almost 700MB total for all supersampling sizes at a base resolution of 1024x576 pixels (6 bytes per pixel). Changing the base resolution means recalculating and storing another full set of arrays. The cache will bloat!
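As a sanity check on that figure (my arithmetic, not et's code): summing over the supersampling factors 1..8 at 6 bytes per supersampled pixel gives

```haskell
-- total cache bytes for base resolution 1024x576,
-- supersampling presets 1x1 through 8x8, 6 bytes per pixel
cacheBytes :: Int
cacheBytes = sum [ (1024 * k) * (576 * k) * 6 | k <- [1 .. 8] ]
-- 721944576 bytes, i.e. about 688 MiB
```

which matches the "almost 700MB" above.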
So I'm thinking there might be a cheaper procedural way to do what the arrays do, which is #Adam7-style interlacing (coarse pixels getting gradually finer over multiple rendering passes), with the pixels in each pass ordered so that the center gets rendered first and it spreads out towards the edges.
So in short I need a cheap function from (ImageWidth, ImageHeight, PixelIndex) to (PixelX, PixelY, PixelWidth, PixelHeight), because using anything other than #atomics to increment PixelIndex is much too slow in #concurrent #multicore Haskell.
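To make that concrete, here's a toy sketch (my own code, not et's) of what such a closed-form function could look like for plain power-of-two interlacing: pass 0 renders a coarse grid of big pixels, and each later pass halves the stride and renders only the grid points that are new at that stride. It enumerates each pass row-major, so the center-out ordering et wants would still need a permutation of the in-pass index; all names and the pass scheme are my assumptions.

```haskell
import Data.Bits (shiftL)

ceilDiv :: Int -> Int -> Int
ceilDiv a b = (a + b - 1) `div` b

-- columns and rows of the grid of stride-s points covering a w x h image
grid :: Int -> Int -> Int -> (Int, Int)
grid w h s = (ceilDiv w s, ceilDiv h s)

-- pixels rendered in pass k, where pass k uses stride 2^(n-k)
passSize :: Int -> Int -> Int -> Int -> Int
passSize w h n 0 = let (c, r) = grid w h (1 `shiftL` n) in c * r
passSize w h n k =
  let s        = 1 `shiftL` (n - k)
      (c , r ) = grid w h s
      (c2, r2) = grid w h (2 * s)
  in c * r - c2 * r2          -- stride-s points minus those already done

-- (width, height, index) -> (x, y, pixelWidth, pixelHeight),
-- valid for 0 <= index < w * h
pixelAt :: Int -> Int -> Int -> Int -> (Int, Int, Int, Int)
pixelAt w h n i0 = clip (go 0 i0)
  where
    clip (x, y, s) = (x, y, min s (w - x), min s (h - y))
    go k i
      | i < sz    = inPass k i
      | otherwise = go (k + 1) (i - sz)
      where sz = passSize w h n k
    inPass 0 j =
      let s      = 1 `shiftL` n
          (c, _) = grid w h s
      in ((j `mod` c) * s, (j `div` c) * s, s)
    inPass k j =
      let s        = 1 `shiftL` (n - k)
          (c , r ) = grid w h s
          (ce, re) = ((c + 1) `div` 2, (r + 1) `div` 2) -- even-indexed cols/rows
          (co, ro) = (c `div` 2, r `div` 2)             -- odd-indexed cols/rows
          j'  = j  - co * re
          j'' = j' - ce * ro
          (a, b)  -- new points at this stride: grid index not both even,
                  -- enumerated in three row-major groups, Adam7-style
            | j  < co * re = (2 * (j   `mod` co) + 1, 2 * (j   `div` co))
            | j' < ce * ro = (2 * (j'  `mod` ce),     2 * (j'  `div` ce) + 1)
            | otherwise    = (2 * (j'' `mod` co) + 1, 2 * (j'' `div` co) + 1)
      in (a * s, b * s, s)
```

Each worker thread would then just fetch-and-add the one shared PixelIndex counter (e.g. with an atomic counter along the lines of atomic-primops) and call pixelAt, with no big precomputed array in sight.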
I re-enabled "ignore isolated glitches" for non-progressive rendering (used by the CLI renderer, but not the GTK GUI). This speeds up high-resolution perturbation rendering at the cost of some inaccuracy (hopefully invisible after downscaling).
The theoretical worst case is 1/4 of the pixels being wrong, but that's highly unlikely in practice.