The f(q) vs a(q) curves look a bit crisper with this method.

And calculating D(q) is easy, just
$$D(q) = \frac{a(q)\,q - f(q)}{q - 1}$$
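A trivial helper just to pin the formula down (plain C; note the division blows up at q = 1, where D is usually taken as a limit):

```
/* generalized dimension D(q) from the singularity strength a(q) and
   spectrum value f(q); undefined at q = 1, take the limit there */
double D_of_q(double q, double a, double f)
{
  return (a * q - f) / (q - 1.0);
}
```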

I ported the linear regression to OpenCL too; now only input and output need to run on the CPU, and the GPU does all the calculations.
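The regression itself is just a least-squares slope fit; here's a plain-C sketch of the idea (assuming, since the posts don't spell it out, the usual multifractal fit of log-moments against log box size; the OpenCL version would run one such fit per Q in parallel):

```
#include <stddef.h>

/* least-squares slope of y against x over n points, as used when fitting
   log-moments against log box size to estimate f(q) and a(q) */
double slope(const double *x, const double *y, size_t n)
{
  double sx = 0, sy = 0, sxx = 0, sxy = 0;
  for (size_t i = 0; i < n; i++) {
    sx  += x[i];        sy  += y[i];
    sxx += x[i] * x[i]; sxy += x[i] * y[i];
  }
  return (n * sxy - sx * sy) / (n * sxx - sx * sx);
}
```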

Some of the test histograms I'm using give bizarro curves though; not sure what's up. Bizarro with two different algorithms means it's probably (?) a real behaviour?

Added EXR import to my calculator. Hacked an option into the command-line renderer to output raw histogram data to EXR. Ran a batch process to render and analyse some of the examples that come with Fractorium (in its Data/Bench directory).

golubaja_rippingfrominside_complexcode.flame

Better multifractal spectrum graphs for the fractal flame upthread. The main differences are that the Q range is higher and more of the calculation is done in OpenCL.

Still need to move the simple linear regression code into OpenCL and get it working on the GPU (the OpenCL code works perfectly fine on the CPU, but on my GPU it outputs garbage, different on each run; even the initial image mipmap reduction stage fails).
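For context, the reduction stage is conceptually simple; a minimal sketch (invented names, not the actual kernel) of one 2x2 box-sum step, assuming the densities live in plain float buffers and the kernel is enqueued once per mipmap level with halved dimensions:

```
__kernel void reduce2x2(__global const float *src, __global float *dst,
                        int src_w, int src_h)
{
  int x = get_global_id(0);
  int y = get_global_id(1);
  int dst_w = src_w / 2;
  if (x >= dst_w || y >= src_h / 2) return;  /* guard ragged work sizes */
  int sx = 2 * x, sy = 2 * y;
  dst[y * dst_w + x] = src[ sy      * src_w + sx    ]
                     + src[ sy      * src_w + sx + 1]
                     + src[(sy + 1) * src_w + sx    ]
                     + src[(sy + 1) * src_w + sx + 1];
}
```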

Computation time with 1024 Q steps is 23 minutes on the CPU; the input histogram image resolution is 7280x7552x4 f32.

Another multifractal spectrum of the same fractal flame as before. This time it's f(alpha) vs alpha instead of D(Q) vs Q.

A cluster of lines starts near (1.5, 1.4); the lines get closer to each other as they increase (progressively more slowly) towards (2, 2).

Wrote a little OpenCL thing to compute multifractal spectrum. Fed it with the histogram (exported from a patched version of Fractorium) from a fractal flame by C-91 (the colours here are slightly different; I experimented with auto-white-balance post-processing stuff).

I got density estimation (constant work per pixel) working!

Takes about 1-2 seconds to do the DE for a typical 2048x1024 image.

Much faster than my previous attempts, which were O(dim^2) per pixel, or O(dim^4) for the whole image (totally impractical for non-tiny images).
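As a rough illustration of how constant work per pixel is even possible (invented names, simplified to a box filter; not necessarily the exact approach): precompute a mipmap pyramid of the histogram, then have each pixel read from the level whose footprint matches its adaptive kernel width, wide where the histogram is sparse and narrow where it is dense.

```
#include <math.h>

/* illustrative only: read a pixel's density-estimated value from a
   precomputed mipmap pyramid, choosing the level whose footprint matches
   an adaptive kernel width (wider blur where the histogram is sparse) */
float de_lookup(const float *const *pyramid, int levels, int width,
                int x, int y, float density)
{
  float radius = fminf(32.0f, 1.0f / sqrtf(fmaxf(density, 1e-6f)));
  int level = (int)fminf((float)(levels - 1), log2f(fmaxf(radius, 1.0f)));
  int lw = width >> level;                    /* dimensions at that level */
  return pyramid[level][(y >> level) * lw + (x >> level)];
}
```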

I've written a blog post about it which will be published on Saturday.

Experiment: set the target colour depending on output coordinates instead of transformation id. Makes a sort of fractal blur of the palette image.

This one is using NASA's April Blue Marble Next Generation
visibleearth.nasa.gov/images/7

Ported my renderer to . Even on the is way too slow to be useful; it's quicker to get the same visual quality by plotting a zillion points.

The attached has around 8k samples per pixel, taking around 1min/frame (1 hour total): 256k subframes for motion blur, each being a single path of 16k iterations. Only plain simple stuff, without xaos control.

Think I fixed it. Formula for the horizontal blur radius hx in terms of the vertical blur radius hy (previously the omnidirectional blur radius h):

```
float height2f = 0.5f * height;          /* half the image height */
float z = (y + 0.5f) / height2f - 1.0f;  /* z in (-1, 1), 0 at the equator */
float r2 = 1.0f - z * z;                 /* falls towards 0 at the poles */
float hx = hy / r2;                      /* widen the kernel near the poles */
```

The previous video has a bad appearance at the poles when viewed in 360 (after injecting metadata with Google's spatial-media Python tool). I tried jittering the histogram accumulation to blur it, but the artifacts remain. I guess I'll have to do the blur in the density estimation pass.

Working on my renderer.

Better density estimation: doing it with linear histograms instead of logarithmic ones makes it work with "keep doubling" batch sizes, instead of having to run it after every small constant-size batch. This sped up one test from 18s to 12s.

Also proper interpolation of the Moebius transformations, via Bezier splines of multiplier and two fixed points on the Riemann sphere, remembering that additional control points are needed and that the curve passes through only every third point. The additional points are generated from approximated derivatives at the points the curve passes through. Animation speed is normalized: the parameter is found by binary search in a precomputed array of approximate arc lengths (see the sketch below).
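The speed normalization is the only fiddly part; a sketch with invented names (assumes cumlen[0] = 0 and monotonically increasing entries):

```
#include <stddef.h>

/* find the curve parameter in [0,1] at which a fraction s of the total
   (approximate) arc length has been traversed, given cumulative lengths
   sampled at n evenly spaced parameter values */
double param_at_arclength(const double *cumlen, size_t n, double s)
{
  double target = s * cumlen[n - 1];
  size_t lo = 0, hi = n - 1;
  while (lo + 1 < hi) {   /* invariant: cumlen[lo] <= target <= cumlen[hi] */
    size_t mid = (lo + hi) / 2;
    if (cumlen[mid] < target) lo = mid; else hi = mid;
  }
  double span = cumlen[hi] - cumlen[lo];
  double frac = span > 0 ? (target - cumlen[lo]) / span : 0;
  return (lo + frac) / (double)(n - 1);  /* interpolate within the segment */
}
```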

Also auto white balance copy/pasted from GIMP: only the first frame is analysed and the resulting bounds are applied to all frames, to avoid strobing from independently analysed frames (better would be to analyse the whole video, but storage is probably a bit of an issue for that).

Also, by accumulating discrete subframes of 1 sample per pixel each into the histogram, I think the video has 256 samples per pixel total.

Video comparison of the two rendering processes.

With density estimation: starts blurry, gets sharper.

Without density estimation: noise gradually reduces.

Each frame approximately doubles the number of samples per pixel.

Without any density estimation at all, the same number of samples takes only 42mins wall-clock on the same CPU. Colours are different though: there's more contrast, and the highlights are less saturated.

I tried to speed up the adaptive density estimation by doing it only after each doubling of samples, but it didn't work out. It works much better if you do it after a constant number of samples, which means you need to do it way more often, but later passes should be faster if the image density histogram isn't too pathological.

The attached image is with density estimation after each accumulation averaging 1 sample per pixel; the total number of points plotted is 65550.5 samples per pixel. Render time was 75mins wall-clock on a 16-thread CPU. The image was post-processed to improve contrast (GIMP auto white balance).

De Casteljau style interpolation¹ of the transformations, going via 2x2 matrix diagonalization² as the linear interpolation base case.

The animation has 4 static transformations; the other 4 are interpolated along a loop between themselves.

Maybe a bit too fast and a bit too low resolution; fixing that is possible by applying more computing power. This version took 3m30s to render.

¹ en.wikipedia.org/wiki/De_Caste
² mathr.co.uk/blog/2015-02-06_in
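The nice thing about De Casteljau's algorithm is that the whole curve evaluation reduces to repeated pairwise interpolation, so swapping the base case changes how everything blends. A sketch with stand-in types (entrywise lerp shown here; the real base case interpolates via diagonalization, as per footnote 2):

```
typedef struct { double m[2][2]; } Mat2;    /* stand-in for a 2x2 matrix */

/* base case: plain entrywise lerp; replace with interpolation via
   diagonalization (footnote 2) to blend transformations properly */
static Mat2 lerp(Mat2 a, Mat2 b, double t)
{
  Mat2 r;
  for (int i = 0; i < 2; i++)
    for (int j = 0; j < 2; j++)
      r.m[i][j] = (1 - t) * a.m[i][j] + t * b.m[i][j];
  return r;
}

/* De Casteljau's algorithm (footnote 1): repeatedly interpolate adjacent
   control points until one point, the curve value at parameter t, remains */
static Mat2 de_casteljau(const Mat2 *ctrl, int n, double t)
{
  Mat2 work[16];                            /* assumes n <= 16 */
  for (int i = 0; i < n; i++) work[i] = ctrl[i];
  for (int level = n - 1; level > 0; level--)
    for (int i = 0; i < level; i++)
      work[i] = lerp(work[i], work[i + 1], t);
  return work[0];
}
```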

Switched from float to uint32_t for the accumulation buffer, but at level 14 I got some white artifacts, which I think are due to overflow wrapping back to 0. Trying again with uint64_t.

Trying to make it faster, as I'm not sure if float atomics are CPU-accelerated (they might take locks in software?).
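Emulating a float atomic add by hand shows why it can be slow; a C11 sketch with invented names (when there's no native instruction, this compare-and-swap loop, or worse, a lock, is roughly what you get; integer atomics sidestep it entirely):

```
#include <stdatomic.h>
#include <stdint.h>
#include <string.h>

/* add val to a float histogram bin shared between threads, using a 32-bit
   compare-and-swap loop instead of a (possibly lock-based) float atomic */
static void atomic_add_float(_Atomic uint32_t *slot, float val)
{
  uint32_t old_bits = atomic_load_explicit(slot, memory_order_relaxed);
  for (;;) {
    float f;
    memcpy(&f, &old_bits, sizeof f);        /* reinterpret bits as float */
    f += val;
    uint32_t new_bits;
    memcpy(&new_bits, &f, sizeof new_bits);
    if (atomic_compare_exchange_weak_explicit(slot, &old_bits, new_bits,
                                              memory_order_relaxed,
                                              memory_order_relaxed))
      break;         /* success; on failure old_bits was refreshed, retry */
  }
}
```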

Looks better with the density estimation kernel width factor increased (the maximum width is actually a bit lower, but this scaling means the first bunch of samples is blurred smoothly, before the later samples with narrow kernels tighten things up).
