The f(q) vs α(q) curves look a bit crisper with this method.

And calculating D(q) is easy, just
$$D(q) = \frac{q\,\alpha(q) - f(q)}{q - 1}$$
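
In code that's a one-liner per q; the only wrinkle is q = 1, where the formula is 0/0 and D(1) = α(1) in the limit. A hypothetical C sketch (names are illustrative, not the calculator's actual variables):

```c
#include <math.h>

/* Derive D(q) from the alpha(q) and f(q) arrays produced by the
 * spectrum code. Illustrative sketch, not the actual implementation. */
void dq_from_spectrum(const float *q, const float *alpha, const float *f,
                      float *D, int n)
{
    for (int i = 0; i < n; ++i) {
        if (fabsf(q[i] - 1.0f) < 1e-6f)
            D[i] = alpha[i];          /* limiting value at q = 1 */
        else
            D[i] = (q[i] * alpha[i] - f[i]) / (q[i] - 1.0f);
    }
}
```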

I ported the linear regression to OpenCL too; now only input and output run on the CPU, and the GPU does all the calculations.
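
Schematically, the slope-fitting kernel looks something like this: a sketch with assumed names and data layout (one work-item per q, closed-form least squares over the per-scale sums), not the actual code:

```c
/* OpenCL C. x holds log(scale) for each of nscales mipmap levels;
 * y holds the per-q, per-scale sums laid out as y[q * nscales + s].
 * Each work-item fits one q's slope in closed form. */
__kernel void fit_slopes(__global const float *x,
                         __global const float *y,
                         __global float *slope,
                         const int nscales)
{
    int q = get_global_id(0);
    float sx = 0.0f, sy = 0.0f, sxx = 0.0f, sxy = 0.0f;
    for (int s = 0; s < nscales; ++s) {
        float xs = x[s];
        float ys = y[q * nscales + s];
        sx += xs; sy += ys; sxx += xs * xs; sxy += xs * ys;
    }
    float n = (float)nscales;
    slope[q] = (n * sxy - sx * sy) / (n * sxx - sx * sx);
}
```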

Some of the test histograms I'm using give bizarro curves though; not sure what's up. Bizarro output from two different algorithms suggests it's probably (?) real behaviour.

doi.org/10.1103/PhysRevLett.62.1327

> "Direct determination of the f(α) singularity spectrum" Ashvin Chhabra and Roderick V. Jensen Phys. Rev. Lett. 62, 1327 – Published 20 March 1989

Added EXR import to my calculator. Hacked an option into the command-line renderer to output raw histogram data to EXR. Ran a batch process to render and analyse some of the examples that come with Fractorium (in its Data/Bench directory).

golubaja_rippingfrominside_complexcode.flame

Better multifractal spectrum graphs for the fractal flame upthread. The main differences are that the Q range is larger and that more of the computation is done in OpenCL.

Still need to move the simple linear regression code into OpenCL and get it working on the GPU. The OpenCL code works perfectly fine on the CPU, but on my GPU it outputs garbage (different on each run); even the initial image mipmap reduction stage fails.

Computation time with 1024 Q steps is 23 minutes on the CPU; the input histogram image is 7280x7552 with 4 channels of f32.

Another multifractal spectrum of the same fractal flame as before. This time it's f(alpha) vs alpha instead of D(Q) vs Q.

A cluster of lines starts near (1.5, 1.4); the lines get closer to each other as they increase (progressively more slowly) towards (2, 2).

Wrote a little OpenCL thing to compute the multifractal spectrum. Fed it the histogram (exported from a patched version of Fractorium) of a fractal flame by C-91. The colours here are slightly different because I experimented with auto-white-balance post-processing.

I ran some further benchmarks. At higher resolutions (8k^2) my method takes 2 minutes, while Fractorium takes 5 seconds. The improvement in appearance my method gives at low iteration counts can be outweighed by simply iterating points for another 1m55s: I got 666M iterations per second in one test, and 1m55s at 8k^2 gives another ~1k samples per pixel.

My way became slower because I needed to subdivide into finer bands and oversample in the filters to avoid posterization artifacts from the quantisation...

I was getting isolated frames of random colours; I think I've fixed it now. The problem might have been starting the search for the gamma midpoint between the low and high cutoffs instead of over the entire range.

Mirror repeat palette looks much better than clamping.

That, and the points on the Bézier curves between the points on the Bézier curves between my key frames are probably going out of range... I think I'll make the palette texture repeat (mirror repeat might be nice) instead of clamping to the edge.
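
Mirror repeat is cheap to do by hand if the palette lookup isn't going through an image sampler (with image samplers, OpenCL's CLK_ADDRESS_MIRRORED_REPEAT address mode does the same thing). A sketch:

```c
/* OpenCL C sketch: mirror-repeat wrap for a palette coordinate.
 * Period 2 covers one forward copy plus one reversed copy. */
float mirror_repeat(float t)
{
    float m = fmod(t, 2.0f);
    if (m < 0.0f) m += 2.0f;     /* fmod keeps the sign of t */
    return (m <= 1.0f) ? m : 2.0f - m;
}
```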

Hm, doesn't work as well as I'd hoped. Getting some weird artifacts too; maybe my histogram bins are too far apart...

A simple "auto white balance" algorithm is to stretch each RGB channel so that values below the x-th percentile clip to 0 and values above the (100-x)-th percentile clip to 1. x is typically 0.6% (0.006 as a fraction). But that can still leave a strong colour tint across the whole image, depending on the distribution of values in between.

I'm experimenting with adding automatic per-channel gamma adjustment, to push the channel's 50th-percentile value to 0.5. The formula I'm trying is:

$$
g = \frac{\log 0.5}{\log(\mathrm{mid} - \mathrm{lo}) - \log(\mathrm{hi} - \mathrm{lo})}
$$

then for each channel of each pixel

$$
\mathrm{out} = \left( \frac{\mathrm{in} - \mathrm{lo}}{\mathrm{hi} - \mathrm{lo}} \right)^{g}
$$

Linear-to-sRGB conversion makes the result appear quite light and desaturated, so a target of 0.25 instead of 0.5 may be more satisfying.
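
Per channel, the whole mapping is only a few lines; a C sketch with illustrative names (lo and hi are the percentile cutoffs, mid the channel's 50th-percentile value; in practice g should be computed once per channel, not per pixel):

```c
#include <math.h>

/* Clip to [lo, hi], normalise, then apply the gamma that pushes the
 * channel's midpoint value to the chosen target (0.5 here). */
float tone_map_channel(float in, float lo, float hi, float mid)
{
    float g = logf(0.5f) / (logf(mid - lo) - logf(hi - lo));
    float t = (in - lo) / (hi - lo);
    if (t < 0.0f) t = 0.0f;
    if (t > 1.0f) t = 1.0f;
    return powf(t, g);
}
```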

I got O(1) (constant work per pixel) density estimation (DE) working!

Takes about 1-2 seconds to do the DE for a typical 2048x1024 image.

Much faster than my previous attempts which were O(dim^2) per pixel, or O(dim^4) for the whole image (totally impractical for non-tiny images).

I've written a blog post about it which will be published on Saturday.

Experiment: set the target colour dependent on output coordinates instead of transformation id. Makes a sort of fractal blur of the palette image.

This one is using NASA's April Blue Marble Next Generation
visibleearth.nasa.gov/images/7

Device fission / device partition works fine on the CPU (I can get it to use only 15 of my 16 cores/threads if I like), but on the GPU the call to clCreateSubDevices just returns "invalid value", which I guess means "not supported". I was hoping to leave 1 compute unit free in the hope that it wouldn't make my desktop environment completely unusable for the duration of the computations.
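
The call in question, schematically (a sketch; the hard-coded 15 matches my 16-thread CPU):

```c
#include <CL/cl.h>
#include <stdio.h>

/* Partition off a 15-compute-unit sub-device, leaving one unit free.
 * Works on the CPU here; the GPU returns CL_INVALID_VALUE (-30). */
cl_device_id subdevice_with_counts(cl_device_id dev)
{
    const cl_device_partition_property props[] = {
        CL_DEVICE_PARTITION_BY_COUNTS, 15,
        CL_DEVICE_PARTITION_BY_COUNTS_LIST_END, 0
    };
    cl_device_id sub = NULL;
    cl_uint n = 0;
    cl_int err = clCreateSubDevices(dev, props, 1, &sub, &n);
    if (err != CL_SUCCESS) {
        fprintf(stderr, "clCreateSubDevices: %d\n", err);
        return NULL;
    }
    return sub;
}
```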

I implemented motion blur with a power law for weighting time within the shutter-open interval (for a directional effect). The shutter can also be made much longer than the frame time for interesting trail effects.
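
Schematically, the time sampling is something like this (a sketch with illustrative names; k = 1 is a plain box shutter, larger k bunches samples toward one end of the interval for the directional look):

```c
#include <math.h>
#include <stdlib.h>

/* Pick a sample time in the shutter-open interval, power-law weighted.
 * t_close - t_open can exceed the frame time for long trails. */
double sample_shutter_time(double t_open, double t_close, double k)
{
    double u = (double)rand() / (double)RAND_MAX;  /* uniform in [0,1] */
    return t_open + (t_close - t_open) * pow(u, k);
}
```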

The video upthread was using this image as a 2D palette:

ichef.bbci.co.uk/news/660/cpsp

but the auto-white-balance part of the code seems to have made it more purple. The hi-vis is still vaguely visible.

I switched to a ulong accumulation buffer with 48.16 linear-light fixed point. It should be safe enough against overflow, and it has native atomics, instead of having to do a cmpxchg loop with uint/float unions.
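
Schematically, the two alternatives (a sketch, not the exact kernel; the 64-bit atom_add needs the cl_khr_int64_base_atomics extension):

```c
#pragma OPENCL EXTENSION cl_khr_int64_base_atomics : enable

/* 48.16 fixed point: scale by 2^16 and use the native 64-bit atomic. */
inline void accum_fixed(volatile __global ulong *acc, float value)
{
    atom_add(acc, (ulong)(value * 65536.0f));
}

/* The float version this replaces: a compare-exchange retry loop,
 * reinterpreting bits between float and uint. */
inline void accum_float(volatile __global uint *acc, float value)
{
    uint old, next;
    do {
        old  = *acc;
        next = as_uint(as_float(old) + value);
    } while (atomic_cmpxchg(acc, old, next) != old);
}
```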

I need to check if AMD have implemented OpenCL device fission yet, because without it the computation hogs my desktop session, making it unusable (interestingly, ssh -X sessions from outside work fine).
