
Ported my renderer to the GPU. Even there it is way too slow to be useful; it's quicker to get the same visual quality by plotting a zillion points.

The attached video has around 8k samples per pixel, taking around 1min/frame (1 hour total). 256k subframes for motion blur, each being a single path of 16k iterations. Only plain simple stuff, without xaos control.
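(Back-of-envelope, assuming most iterations land a point in frame: 256k × 16k = 2^32 ≈ 4.3 billion points per frame, and at roughly 8k samples per pixel that implies about 2^19 ≈ 524k pixels, consistent with a 1024×512 equirectangular frame.)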

claude boosted

@mathr it looks crazy indeed ... I could not resist trying it in a 360 video template in AFrame ... and it's indeed pretty wild ;)
https://olme.noho.st/webvr/hello-world/360video-mathrcl.html

Think I fixed it. Formula for horizontal blur radius hx in terms of vertical blur radius hy (replacing the previously omnidirectional blur radius h):

```
float height2f = 0.5f * height;         /* half the image height */
float z = (y + 0.5f) / height2f - 1.0f; /* -1 at the top pole, +1 at the bottom */
float r2 = 1.0f - z * z;                /* shrinks towards the poles */
float hx = hy / r2;                     /* widen horizontal blur near the poles */
```


Previous video has bad appearance at the poles when viewed in 360 (after injecting metadata with Google's spatial-media Python tool). I tried jittering the histogram accumulation to blur it, but the artifacts remain. I guess I'll have to do the blur in the density estimation pass.


Working on my renderer.

Better density estimation: doing it with linear histograms instead of logarithmic makes it work with "keep doubling" batch sizes, instead of having to run it after every small constant-size batch. This sped up one test from 18s to 12s.
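A minimal sketch of the doubling schedule, with hypothetical plot_batch() and density_estimate() standing in for the real passes:

```
void plot_batch(long samples);   /* hypothetical: run the chaos game, accumulate into the histogram */
void density_estimate(void);     /* hypothetical: adaptive blur of the linear histogram */

void render_progressive(long budget, long pixels)
{
  long batch = pixels;  /* start near 1 sample per pixel */
  long total = 0;
  while (total < budget)
  {
    plot_batch(batch);
    total += batch;
    density_estimate(); /* runs O(log n) times instead of O(n) */
    batch *= 2;         /* keep doubling */
  }
}
```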

Also proper interpolation of the Möbius transformations, via Bézier splines of the multiplier and two fixed points on the Riemann sphere, remembering that additional control points are needed and the curve passes through only every third point. The additional points are generated from approximated derivatives at the points the curve passes through. Animation speed is normalized: the parameter is found by binary search in a precomputed array of approximate arc lengths.
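A minimal sketch of that speed normalization; curve_point() is a hypothetical evaluator for the spline, and chord lengths approximate arc length:

```
#include <complex.h>

#define SEGS 1024
static float arclen[SEGS + 1];       /* cumulative approximate arc length */

complex float curve_point(float t);  /* hypothetical spline evaluator, t in [0,1] */

void precompute_arclen(void)
{
  arclen[0] = 0.0f;
  for (int i = 1; i <= SEGS; ++i)
    arclen[i] = arclen[i - 1]
              + cabsf(curve_point(i / (float) SEGS)
                    - curve_point((i - 1) / (float) SEGS));
}

/* invert s -> t by binary search, so equal steps in s advance
   at (approximately) constant speed along the curve */
float normalized_parameter(float s)
{
  float target = s * arclen[SEGS];
  int lo = 0, hi = SEGS;
  while (hi - lo > 1)
  {
    int mid = (lo + hi) / 2;
    if (arclen[mid] < target) lo = mid; else hi = mid;
  }
  /* linear interpolation within the bracketing segment */
  float f = (target - arclen[lo]) / (arclen[hi] - arclen[lo]);
  return (lo + f) / SEGS;
}
```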

Also auto white balance copy/pasted from GIMP: only the first frame is analysed and the resulting bounds are applied to all frames, to avoid strobing from independently-adjusted frames (better would be to analyse the whole video, but storage is probably a bit of an issue for that).
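As far as I understand the behaviour I copied, GIMP's white balance discards 0.05% of pixels at each end of each channel and stretches the rest; a sketch with illustrative names, with find_bounds() run once on frame 0 and apply_bounds() run on every frame:

```
#include <stdlib.h>
#include <string.h>

static int cmp_float(const void *a, const void *b)
{
  float x = *(const float *)a, y = *(const float *)b;
  return (x > y) - (x < y);
}

/* percentile bounds of one channel, clipping 0.05% at each end */
void find_bounds(const float *chan, long n, float *lo, float *hi)
{
  float *tmp = malloc(n * sizeof *tmp);
  memcpy(tmp, chan, n * sizeof *tmp);
  qsort(tmp, n, sizeof *tmp, cmp_float);
  *lo = tmp[(long)(0.0005 * (n - 1))];
  *hi = tmp[(long)(0.9995 * (n - 1))];
  free(tmp);
}

/* stretch one channel of one frame using the frame-0 bounds */
void apply_bounds(float *chan, long n, float lo, float hi)
{
  for (long i = 0; i < n; ++i)
  {
    float v = (chan[i] - lo) / (hi - lo);
    chan[i] = v < 0.0f ? 0.0f : v > 1.0f ? 1.0f : v;
  }
}
```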

Also motion blur, by accumulating discrete subframes of 1 sample per pixel each into the histogram; I think the video has 256 samples per pixel total.

I made the video comparison by hand in GIMP using layer groups; a bit of slow manual labour, but probably quicker than figuring out how to script it...


Video comparison of the two rendering processes.

With density estimation: starts blurry, gets sharper.

Without density estimation: noise gradually reduces.

Each frame approximately doubles the number of samples per pixel.


Without any density estimation at all, the same number of samples takes only 42mins wall-clock on the same CPU. Colours are different though: there's more contrast, and the highlights are less saturated.

claude boosted

If somebody I'd seen around the neighbourhood started asking me for the phone numbers of my friends or family members, I'd say no. Or at least I'd ask first.

So why the fuck are my friends giving away my phone number* to random companies I've never heard of?

* "Sharing contacts" or giving permission to access contacts is basically this, right?

I tried meet.jit.si for video calls and it worked without any issues (a bit of latency shift between video and sound at times, but no big deal). No account needed.

Contrasting with one alternative where sound was garbled (not to mention the major privacy concerns), and another which also had sound issues (and needed a phone number for account activation by SMS).

I think I'll see if I can get my parents to install chromium and try jitsi.

I tried to speed up the adaptive density estimation by doing it only after each doubling of samples, but it didn't work out. It works much better if you do it after a constant number of samples, which means you need to do it way more often, but later passes should be faster if the image density histogram isn't too pathological.

Attached image is with density estimation after each accumulation averaging 1 sample per pixel; the total number of points plotted is 65550.5 samples per pixel. Render time was 75mins wall-clock on a 16-thread CPU. The image was post-processed to improve contrast (GIMP auto white balance).

claude boosted

Here's how I ask for open source in my mutual aid groups, when they're talking about setting up new digital tools:

I don't have the bandwidth to involve myself in any of the tech stuff, but have a strong personal preference for open-source software. Is there a chance either of you is already familiar with <open-source alternative>, and could set it up instead?

---

Again, FOSS nerds, mutual aid groups need your help. Stay on your ass but close the video games.

I thought desktop Linux was bloated, but it turns out the 4x hugetlb 1GB pages I set up a while ago for a voxel experiment are preallocated and never swapped out. So my current experiment was swapping hard when using 29GB even though I have 32GB RAM. Disabled the hugetlb stuff for now and it works as expected. There were loads of bloaty things left after stopping lightdm, but a sudo killall -u claude fixed that.

De Casteljau-style interpolation¹ of Möbius transformations, going via 2x2 matrix diagonalization² as the linear interpolation base case.

Animation has 4 static transformations, the other 4 are interpolated along a loop between themselves.

Maybe a bit too fast and a bit too low resolution, fixing that is possible by applying more computing power. This version took 3m30s to render.

¹ en.wikipedia.org/wiki/De_Caste
² mathr.co.uk/blog/2015-02-06_in
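A minimal sketch of the diagonalization base case as I understand it from ², assuming distinct eigenvalues and c ≠ 0 (the degenerate cases need separate handling); mat2 and moebius_pow are illustrative names:

```
#include <complex.h>

/* a Moebius transformation as a 2x2 complex matrix [[a,b],[c,d]] */
typedef struct { complex double a, b, c, d; } mat2;

/* interpolate from the identity to m at time t in [0,1]:
   diagonalize m = P D P^-1 and return P D^t P^-1 */
mat2 moebius_pow(mat2 m, double t)
{
  complex double tr  = m.a + m.d;
  complex double det = m.a * m.d - m.b * m.c;
  complex double s   = csqrt(tr * tr - 4 * det);
  complex double l1  = 0.5 * (tr + s);   /* eigenvalues */
  complex double l2  = 0.5 * (tr - s);
  complex double v1  = (l1 - m.d) / m.c; /* eigenvectors (v, 1); needs c != 0 */
  complex double v2  = (l2 - m.d) / m.c;
  complex double p1  = cpow(l1, t);      /* fractional powers of eigenvalues */
  complex double p2  = cpow(l2, t);
  complex double inv = 1.0 / (v1 - v2);  /* from inverting P = [[v1,v2],[1,1]] */
  mat2 r;
  r.a = inv * (v1 * p1 - v2 * p2);
  r.b = inv * (v1 * v2 * (p2 - p1));
  r.c = inv * (p1 - p2);
  r.d = inv * (v1 * p2 - v2 * p1);
  return r;
}
```

Between two transformations m0 and m1, the base case would then be something like moebius_pow applied to m1 composed with the inverse of m0, composed back with m0.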

Was trying to run some parallel code: CPU usage was only 75% on each core when I expected 100%. Was worried there might be excessive synchronisation bugs...

Turned out that another, completely unrelated process was using 100% of one core; stopped that temporarily and the parallel code went up to 100% on all cores. Problem solved.

Lesson learned: a clean system is important for benchmarking and profiling.

uint64_t fixes the artifacts. Didn't think to measure the speed before/after.


Switched from float to uint32_t for the accumulation buffer, but at level 14 I got some white artifacts which I think are due to overflow wrapping back to 0. Trying again with uint64_t.

Trying to make it faster, as I'm not sure float atomics are CPU accelerated (they might take locks in software?).
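For illustration, a C11 sketch of the difference (names are mine): unsigned integers get a single hardware fetch-add, while a float accumulator has to be emulated with a compare-and-swap loop:

```
#include <stdatomic.h>
#include <stdint.h>
#include <string.h>

/* uint64_t: one atomic add, typically a single hardware instruction */
void accumulate_u64(_Atomic uint64_t *cell, uint64_t amount)
{
  atomic_fetch_add_explicit(cell, amount, memory_order_relaxed);
}

/* float: no fetch-add on most CPUs, so loop on compare-and-swap */
void accumulate_float(_Atomic uint32_t *cell, float amount)
{
  uint32_t oldbits = atomic_load_explicit(cell, memory_order_relaxed);
  uint32_t newbits;
  do {
    float f;
    memcpy(&f, &oldbits, sizeof f);    /* reinterpret bits as float */
    f += amount;
    memcpy(&newbits, &f, sizeof newbits);
  } while (!atomic_compare_exchange_weak_explicit(cell, &oldbits, newbits,
               memory_order_relaxed, memory_order_relaxed));
}
```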


Looks better with the density estimation kernel width factor increased (the maximum width is actually a bit lower, but this scaling means the first bunch of samples is blurred smoothly, before the later samples with narrower kernels tighten things up).
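Roughly the relationship I mean, as a sketch (names and the exponent are illustrative; the general shape, width shrinking with density up to a clamp, follows the flam3-style adaptive filter):

```
#include <math.h>

/* kernel radius shrinks as more samples land in a histogram cell */
float kernel_radius(float factor, float max_radius, float density)
{
  float r = factor / powf(density, 0.5f); /* illustrative exponent */
  return r > max_radius ? max_radius : r; /* clamp to the maximum width */
}
```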


I implemented

> "Adaptive filtering for progressive Monte Carlo image rendering" (2000) by Frank Suykens , Yves D. Willems
> citeseerx.ist.psu.edu/viewdoc/ (links to 2.4MB PDF)

for a toy fractal flame renderer, following the plan of

> "The Fractal Flame Algorithm" (2003-2008) by Scott Draves, Erik Reckase
> flam3.com/flame_draves.pdf (22MB(!) PDF)

Seems to work well!

I haven't yet optimized the first pass with fixed radius; if I use a separated blur kernel (blur horizontally, then blur the result vertically), it should be a lot faster for most images (for sparse dots it might be slower, but those are boring).
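A sketch of the separated version, assuming a row-major float image; a box kernel here for simplicity, whereas the real pass would use its own kernel. Two 1D passes cost O(r) per pixel instead of O(r²):

```
/* blur n samples spaced `stride` apart, normalizing at the edges */
static void blur_1d(const float *src, float *dst, int n, int stride, int radius)
{
  for (int i = 0; i < n; ++i)
  {
    float sum = 0.0f;
    int count = 0;
    for (int j = i - radius; j <= i + radius; ++j)
      if (0 <= j && j < n) { sum += src[j * stride]; ++count; }
    dst[i * stride] = sum / count;
  }
}

/* horizontal pass into tmp, then vertical pass back into img */
void blur_separable(float *img, float *tmp, int width, int height, int radius)
{
  for (int y = 0; y < height; ++y)
    blur_1d(img + y * width, tmp + y * width, width, 1, radius);
  for (int x = 0; x < width; ++x)
    blur_1d(tmp + x, img + x, height, width, radius);
}
```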
