The attached has around 8k samples per pixel, taking around 1min/frame (1 hour total): 256k subframes for motion blur, each being a single path of 16k iterations. Only plain simple #MoebiusTransformation stuff, without xaos control.
@mathr it looks crazy indeed... I could not resist trying it in a 360 video template in AFrame... and it's indeed pretty wild ;)
Think I fixed it. Formula for the horizontal blur radius hx in terms of the vertical blur radius hy (previously a single omnidirectional blur radius h); in an equirectangular 360 projection, horizontal distances shrink towards the poles, so the horizontal radius has to grow to compensate:
float height2f = 0.5f * height;
float z = (y + 0.5f) / height2f - 1.0f; /* z in (-1, 1), top to bottom */
float r2 = 1.0f - z * z;                /* 0 at the poles, 1 at the equator */
float hx = hy / r2;                     /* blur radius grows towards the poles */
The previous video has bad appearance at the poles when viewed in 360 (after injecting metadata with Google's spatial-media Python tool). I tried jittering the histogram accumulation to blur it, but the artifacts remain. I guess I'll have to do the blur in the density estimation pass.
Working on my #FractalFlame renderer.
Better #DensityEstimation: doing it with linear histograms instead of logarithmic ones makes it work with "keep doubling" batch sizes, instead of having to run it after every small constant-size batch. This sped up one test from 18s to 12s.
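The schedule looks roughly like this (a minimal sketch; accumulate() and estimate() are hypothetical stand-ins for the real splatting and filtering passes):

#include <stdint.h>

/* hypothetical stand-ins for the real passes */
void accumulate(uint64_t *histogram, uint64_t samples);
void estimate(float *image, const uint64_t *histogram);

void render(uint64_t *histogram, float *image, uint64_t pixels, int passes) {
  uint64_t batch = pixels;          /* start at ~1 sample per pixel */
  for (int pass = 0; pass < passes; ++pass) {
    accumulate(histogram, batch);   /* splat 'batch' more samples */
    estimate(image, histogram);     /* density estimation on linear counts */
    batch *= 2;                     /* total samples roughly double each pass */
  }
}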
Also proper #BézierCurve interpolation of the #MoebiusTransformation, via #Slerp of the multiplier and the two fixed points on the #RiemannSphere, remembering that additional #ControlPoints are needed and that the curve passes through only every third point. The additional points are generated from approximated derivatives at the points the curve passes through. Animation speed is normalized: the parameter is found by binary search in a precomputed array of approximate arc lengths.
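For reference, #Slerp between two points on the unit sphere looks something like this (a generic sketch, not my renderer's actual code; it assumes the fixed points have already been mapped from the complex plane onto the sphere by stereographic projection, and vec3 is my own type):

#include <math.h>

typedef struct { double x, y, z; } vec3;

vec3 slerp(vec3 a, vec3 b, double t) {
  double d = a.x * b.x + a.y * b.y + a.z * b.z; /* cosine of the angle between */
  if (d > 1) d = 1;
  if (d < -1) d = -1;
  double theta = acos(d);
  double s = sin(theta);
  if (s < 1e-12) return a; /* (anti)parallel: interpolation is degenerate */
  double wa = sin((1 - t) * theta) / s;
  double wb = sin(t * theta) / s;
  return (vec3){ wa * a.x + wb * b.x, wa * a.y + wb * b.y, wa * a.z + wb * b.z };
}

The multiplier gets the same treatment, and the extra Bézier control points come from the approximated derivatives mentioned above.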
Also #AutoWhiteBalance copy/pasted from GIMP: only the first frame is analysed and the resulting bounds are applied to all frames, to avoid strobing from independently-balanced frames (better would be to analyse the whole video, but storage is probably a bit of an issue for that).
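The gist of what got copy/pasted, roughly reconstructed (not GIMP's actual source): per channel, find the levels outside which only a tiny fraction of pixels fall (0.05% at each end, if I remember right), then linearly stretch that range, reusing the first frame's bounds for every frame:

#include <stdint.h>

void bounds_from_histogram(const uint64_t hist[256], uint64_t total,
                           double *lo, double *hi) {
  uint64_t cut = total / 2000; /* 0.05% of pixels discarded at each end */
  uint64_t acc;
  int i = 0, j = 255;
  for (acc = 0; i < 255 && acc + hist[i] <= cut; ++i) acc += hist[i];
  for (acc = 0; j > 0 && acc + hist[j] <= cut; --j) acc += hist[j];
  *lo = i / 255.0;
  *hi = j / 255.0;
}

double stretch(double v, double lo, double hi) {
  if (hi <= lo) return v; /* degenerate bounds: leave the value alone */
  double w = (v - lo) / (hi - lo);
  return w < 0 ? 0 : w > 1 ? 1 : w; /* clamp to [0, 1] */
}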
Also #MotionBlur, by accumulating discrete subframes of 1 sample per pixel each into the histogram; I think the video has 256 samples per pixel total.
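The subframe loop is as simple as it sounds (a sketch; render_subframe() is a hypothetical stand-in for one 1-sample-per-pixel pass at a fixed time):

#include <stdint.h>

/* hypothetical stand-in: splat 1 sample per pixel at time t */
void render_subframe(uint64_t *histogram, double t);

void motion_blur(uint64_t *histogram, double frame_time, double frame_duration,
                 int subframes) {
  for (int s = 0; s < subframes; ++s) {
    /* spread subframe times evenly across the frame's shutter interval */
    double t = frame_time + (s + 0.5) / subframes * frame_duration;
    render_subframe(histogram, t);
  }
  /* the histogram now holds 'subframes' samples per pixel, blurred in time */
}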
Video comparison of the two rendering processes.
With density estimation: starts blurry, gets sharper.
Without density estimation: noise gradually reduces.
Each frame approximately doubles the number of samples per pixel.
Comparison of the two images.
If somebody I'd seen around the neighbourhood started asking me for the phone numbers of my friends or family members, I'd say no. Or at least I'd ask first.
So why the fuck are my friends giving away my phone number* to random companies I've never heard of?
* "Sharing contacts" or giving permission to access contacts is basically this, right?
I think I'll see if I can get my parents to install Chromium and try Jitsi.
I tried to speed up the adaptive #DensityEstimation by doing it only after each doubling of samples, but it didn't work out. It works much better if you do it after a constant number of samples, which means you need to do it far more often, but later passes should be faster if the image density histogram isn't too pathological.
The attached image is with density estimation after each accumulation averaging 1 sample per pixel; the total number of points plotted works out to 65550.5 samples per pixel. Render time was 75 minutes wall-clock on a 16-thread CPU. The image was post-processed to improve contrast (GIMP auto white balance).
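The adaptive part, in the spirit of the flam3 paper's density estimation (a sketch with my own names, not the renderer's actual code): blur widely where few samples landed, narrowly where many did.

#include <math.h>
#include <stdint.h>

/* 'max_width' and 'curve' are tuning parameters */
double kernel_width(uint64_t count, double max_width, double curve) {
  if (count == 0) return max_width;
  double w = max_width / pow((double) count, curve);
  return w < 1 ? 1 : w; /* never narrower than one histogram cell */
}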
Here's how I ask for open source in my mutual aid groups, when they're talking about setting up new digital tools:
I don't have the bandwidth to involve myself in any of the tech stuff, but have a strong personal preference for open-sourced software. Is there a chance either of you are already familiar with <open-source alternative>, so could set it up instead?
Again, FOSS nerds, mutual aid groups need your help. Stay on your ass but close the video games.
I thought desktop Linux was bloated, but it turns out the 4x hugetlb 1GB pages I set up a while ago for a voxel experiment are preallocated and never swapped out. So my current experiment was swapping hard when using 29GB even though I have 32GB of RAM. Disabled the hugetlb stuff for now and it works as expected. There were loads of bloaty things left after stopping lightdm, but a sudo killall -u claude fixed that.
The animation has 4 static transformations; the other 4 are interpolated along a loop between themselves.
Maybe a bit too fast and a bit too low resolution; fixing that is just a matter of applying more computing power. This version took 3m30s to render.
Turned out that another completely unrelated process was using 100% of one core, stopped that temporarily and the parallel code went up to 100% on all cores. Problem solved.
uint64_t fixes the artifacts. Didn't think to measure the speed before/after.
Switched from float to uint32_t for the accumulation buffer, but at level 14 I got some white artifacts which I think are due to overflow wrapping back to 0. Trying again with uint64_t.
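The failure mode in miniature (just an illustration, not the renderer's code):

#include <stdint.h>
#include <stdio.h>

int main(void) {
  uint32_t c32 = UINT32_MAX; /* a very hot histogram cell */
  uint64_t c64 = UINT32_MAX;
  c32 += 1; /* wraps to 0: the cell suddenly reads as empty */
  c64 += 1; /* 4294967296: still counting correctly */
  printf("%u %llu\n", c32, (unsigned long long) c64);
  return 0;
}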
Looks better with the density estimation kernel width factor increased (the maximum width is actually a bit lower, but this scaling means the first bunch of samples are blurred smoothly, before the later samples with narrow kernels tighten things up).
> "Adaptive filtering for progressive Monte Carlo image rendering" (2000) by Frank Suykens , Yves D. Willems
> https://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.142.2366 (links to 2.4MB PDF)
> "The Fractal Flame Algorithm" (2003-2008) by Scott Draves, Erik Reckase
> https://flam3.com/flame_draves.pdf (22MB(!) PDF)
Seems to work well!
I haven't yet optimized the first pass with fixed radius: if I use a separated blur kernel (just blur horizontally, then blur the result vertically) it should be a lot faster for most images (for sparse dots it might be slower, but those are boring).
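What the separated version would look like, assuming a box kernel for simplicity (a sketch; the column pass is identical with x and y swapped):

void blur_rows(const float *src, float *dst, int w, int h, int r) {
  for (int y = 0; y < h; ++y)
    for (int x = 0; x < w; ++x) {
      float sum = 0;
      int n = 0;
      for (int i = -r; i <= r; ++i) {
        int u = x + i;
        if (0 <= u && u < w) { sum += src[y * w + u]; ++n; }
      }
      dst[y * w + x] = sum / n;
    }
}
/* composing the row pass and the column pass costs 2 * (2r + 1) taps per
   pixel instead of (2r + 1)^2 for the full 2D kernel */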