The f(Q) vs α(Q) curves look a bit crisper with this method.
And calculating D(Q) is easy, just
$$D(Q) = \frac{\alpha(Q)\,Q - f(Q)}{Q - 1}$$
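In code that relation is one line; a minimal Python sketch, assuming α(Q) and f(Q) have already been computed for each Q step (the function name is mine):

```python
def D_of_Q(q, alpha, f):
    """Generalized dimension from the multifractal spectrum:
    D(Q) = (alpha(Q) * Q - f(Q)) / (Q - 1), undefined at Q == 1."""
    if q == 1:
        raise ValueError("Q = 1 needs a limit (information dimension), not this formula")
    return (alpha * q - f) / (q - 1)

# Sanity check: a monofractal with alpha == f == D0 gives D(Q) == D0 for all Q != 1:
print(D_of_Q(2.0, 1.6, 1.6))  # → 1.6
```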
I ported the linear regression to OpenCL too; now only input and output need to run on the CPU, and the GPU does all the calculations.
Some of the test histograms I'm using give bizarro curves, though, and I'm not sure what's up - but bizarro with two different algorithms probably (?) means it's real behaviour.
Added EXR import to my #MultifractalSpectrum calculator. Hacked an option into the #Fractorium command-line renderer to output raw histogram data to EXR. Ran a batch process to render and analyze some of the examples that come with Fractorium (in its Data/Bench directory).
Better multifractal spectrum graphs for the fractal flame upthread. The main difference is that the Q range is higher, and more of it is calculated in OpenCL.
Still need to move the simple linear regression code into OpenCL and get it working on GPU. The OpenCL code runs perfectly fine on CPU, but on my GPU it outputs garbage (different on each run) - even the initial image mipmap reduction stage fails.
Computation time with 1024 Q steps is 23 mins on CPU; input histogram image resolution is 7280×7552×4 f32.
@KnowPresent the other graph I just posted is increasing, but at least it curves downwards a bit...
Another multifractal spectrum of the same fractal flame as before. This time it's f(alpha) vs alpha instead of D(Q) vs Q.
A cluster of lines start near (1.5, 1.4) and get closer to each other as they increase (progressively more slowly) towards (2,2).
I ran some further benchmarks. At higher resolutions (8k²) my method takes 2 mins, while Fractorium takes 5 seconds. The improvement in appearance of my method at low iteration counts can be outweighed by simply iterating points for the extra 1m55s: at the 666M iterations per second I got in one test, 1m55s at 8k² gives roughly another 1k samples per pixel.
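The trade-off arithmetic above, as a back-of-envelope check (numbers are from the benchmark; the variable names are mine):

```python
# Extra samples per pixel gained by iterating for the 1m55s my filter costs,
# instead of running the filter at all.
its_per_sec = 666e6        # measured iterations per second
extra_secs = 115           # 2 min (my method) minus 5 s (Fractorium)
pixels = 8192 * 8192       # "8k^2" histogram resolution
extra_spp = its_per_sec * extra_secs / pixels
print(round(extra_spp))    # ≈ 1141, i.e. "another 1k samples per pixel"
```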
My method became slower because I needed to subdivide into finer bands and oversample in the filters to avoid posterization artifacts from the quantisation...
I was getting isolated frames of random colours; I think I've fixed it now. The problem might have been starting the search for the gamma midpoint between the low and high cutoffs instead of over the entire range.
Mirror repeat palette looks much better than clamping.
that and the points on the bézier curves between the points on the bézier curves between my key frames are probably going out of range... I think I'll make the palette texture repeat (mirror repeat might be nice) instead of clamping to the edge
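Mirror repeat can be emulated without texture hardware; a small plain-Python sketch of the addressing (equivalent in spirit to OpenGL's GL_MIRRORED_REPEAT wrap mode, function name mine):

```python
def mirror_repeat(t):
    """Fold any palette coordinate into [0, 1] with mirrored tiling:
    0..1 maps forwards, 1..2 maps backwards, then the pattern repeats,
    so out-of-range bézier overshoot bounces back instead of clamping."""
    t = abs(t) % 2.0
    return 2.0 - t if t > 1.0 else t
```

For example `mirror_repeat(1.25)` gives `0.75` and `mirror_repeat(-0.25)` gives `0.25`, so slight overshoots past either end of the palette reflect smoothly rather than smearing the edge colour.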
hm doesn't work as well as I'd hoped. getting some weird artifacts too, maybe my histogram bins are too far apart...
A simple "auto white balance" algorithm is to stretch each RGB channel so that pixels below the x-th percentile clip to 0 and pixels above the (100-x)-th percentile clip to 1. x is typically 0.6% (0.006). But that can still leave a strong colour tint across the whole image, depending on the distribution of values in between.
I'm experimenting with adding automatic per-channel gamma adjustment, to push the median pixel value to 0.5. The formula I'm trying is:
g = log(0.5) / (log(mid - lo) - log(hi - lo))
then for each channel of each pixel:
out = pow((in - lo) / (hi - lo), g)
Converting from linear to sRGB makes this appear quite light and desaturated, so targeting 0.25 instead of 0.5 may be more satisfying.
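Putting the stretch and the gamma together, a per-channel sketch in plain Python (the percentile handling is simplistic, `auto_levels` is my name for it, and it assumes lo < mid < hi):

```python
import math

def auto_levels(channel, clip=0.006, target=0.5):
    """Stretch so the bottom/top `clip` fraction saturates at 0/1, then
    apply a gamma chosen so the median lands exactly on `target`."""
    s = sorted(channel)
    n = len(s) - 1
    lo, hi = s[int(clip * n)], s[int((1.0 - clip) * n)]
    mid = s[n // 2]
    # solve ((mid - lo) / (hi - lo)) ** g == target for g
    g = math.log(target) / (math.log(mid - lo) - math.log(hi - lo))
    return [min(max((v - lo) / (hi - lo), 0.0), 1.0) ** g for v in channel]
```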
Takes about 1-2 seconds to do the DE for a typical 2048x1024 image.
Much faster than my previous attempts which were O(dim^2) per pixel, or O(dim^4) for the whole image (totally impractical for non-tiny images).
I've written a blog post about it which will be published on Saturday.
Experiment: set target colour dependent on output coordinates instead of transformation id. Makes sort of a fractal blur of the palette image.
This one is using NASA's April Blue Marble Next Generation
#OpenCL device fission / device partition works fine on #CPU with #PoCL (I can get it to use only 15 of my 16 cores/threads if I like) but on #AMD #GPU the call to clCreateSubDevices just returns "invalid value", which I guess means "not supported". I was hoping to leave 1 compute unit free in the hope that it wouldn't make my desktop environment completely unusable for the duration of the computations.
I implemented motion blur with a power law for weighting time within the shutter-open interval (for a directional effect). I can also make the shutter much longer than the frame time for interesting trail effects.
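One way to get a power-law time weighting is inverse-transform sampling over the normalized shutter interval; a sketch with my own names (the exponent `p` is an assumption, not from the renderer; p = 0 is a uniform shutter, larger p biases samples toward shutter close):

```python
import random

def sample_shutter_time(t_open, shutter, p=2.0, rng=random.random):
    """Draw a sample time in [t_open, t_open + shutter] with density
    proportional to u**p in normalized shutter position u."""
    u = rng() ** (1.0 / (p + 1.0))   # inverse CDF of w(u) ∝ u**p
    return t_open + shutter * u
```

With p = 2 the mean normalized sample time is (p+1)/(p+2) = 0.75, so most of the accumulated energy lands near the end of the exposure, which is what gives the trails a direction.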
The video upthread was using this image as a 2D palette:
but the auto-white-balance part of the code seems to have made it more purple. The hi-vis is still vaguely visible.