@celesteh

Just saw this in my inbox from KFJC radio station:

---8<---
KFJC Month of Mayhem 2021

Music From Mills, Part 1
Saturday Mayhem 1 6PM-8PM

Music from Mills Part 2
Saturday Mayhem 8 at 6PM-8PM

Hosted by Cousin Mary

Recent news of the scheduled closure of Mills College has hit the Bay Area experimental music community especially hard. Join Cousin Mary in an exploration of the music associated with Mills College graduates and faculty.
---8<---

Working on a thing to sonify silent colour videos.

One [vcf~] per pixel, with center frequency (in Hz) determined by hue and Q determined by saturation (just using HSV for now, nothing fancy).

One 20ms delay line per column of pixels.

Input to each column's filters is a weighted sum of nearby delay lines.

Output is panned to stereo according to the column's position in space.
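Roughly, the per-pixel mapping could look like this in C (a minimal sketch: the exponential hue-to-Hz curve, the Q range, and the equal-power pan law are my assumptions, not necessarily what the actual code does):

    /* sketch of the per-pixel mapping; curves are assumptions */
    #include <math.h>

    /* hue in [0,1) -> bandpass center frequency in Hz
       (8 octaves up from 55 Hz, an arbitrary choice) */
    float hue_to_hz(float hue) {
        return 55.0f * powf(2.0f, 8.0f * hue);
    }

    /* saturation in [0,1] -> filter Q (grey = broad, vivid = resonant) */
    float sat_to_q(float sat) {
        return 1.0f + 99.0f * sat;
    }

    /* column position -> equal-power stereo pan gains */
    void pan_gains(int column, int width, float *left, float *right) {
        float p = (width > 1) ? (float)column / (float)(width - 1) : 0.5f;
        *left  = cosf(p * 1.57079632679f);  /* p * pi/2 */
        *right = sinf(p * 1.57079632679f);
    }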

Decidedly not realtime even with OpenMP parallelism on 16 CPU threads.

(though maybe if I ported it to OpenCL and used single precision on GPU and kept video resolution low...)

Smooth droney sounds.

Maybe I can make it richer with some [hilbert~] magic: given a sinusoid, [hilbert~] outputs two sinusoids approximately 90 degrees out of phase over "most" of the spectrum, which lets you derive a [phasor~] from it, which you can then waveshape at will. I'm thinking Chebyshev polynomial recursion to get bandlimited sawtooth waves.
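For the waveshaping step, a sketch in C (assuming [hilbert~] gives a clean quadrature pair (cos t, sin t); the Chebyshev-style recurrence sin((n+1)t) = 2 cos(t) sin(nt) - sin((n-1)t) generates the harmonics of a bandlimited saw):

    /* build a bandlimited sawtooth sum_{n=1}^{N} sin(n t)/n from a
       quadrature pair (c, s) = (cos t, sin t) */
    float bandlimited_saw(float c, float s, int harmonics) {
        float prev = 0.0f;   /* sin(0 t) */
        float curr = s;      /* sin(1 t) */
        float sum = 0.0f;
        for (int n = 1; n <= harmonics; ++n) {
            sum += curr / (float)n;
            float next = 2.0f * c * curr - prev;  /* sin((n+1) t) */
            prev = curr;
            curr = next;
        }
        return sum * 0.63661977f;  /* 2/pi, normalizes peak to ~1 */
    }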

good news: testing some fractal perturbation algorithms with single precision shows they still work (no need for double precision, at least as far as accuracy is concerned). regular perturbed iterations go to zoom 1e30 or so, and rescaled iterations go to 1eLARGE, where LARGE = logBase 10 (2^(2^31 - 1)). this is good because single precision takes half the memory and computes a bit faster (about 30%-50% faster in various power-2 Mandelbrot tests using OpenCL on both GPU and CPU).
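(for reference, that limit comes from the exponent range, assuming floatexp stores a 32-bit signed exponent:

    logBase 10 (2^(2^31 - 1)) = (2^31 - 1) * logBase 10 2 ≈ 6.46e8

i.e. rescaled iterations reach zooms up to around 1e646456992.)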

bad news: the smaller range of single precision float means "deep needle" Burning Ship iterations are considered to be "close to 0" much more often, meaning rescaled single precision float can be significantly slower than rescaled double: 40 seconds vs 3.0 seconds in one test on CPU.

this makes automatically choosing the best number type a bit tricky: it depends on the fractal type, location in the fractal, and the available hardware / drivers / etc

Bit disillusioned with tooling.

Tried to test one of my projects on my tablet and `cabal v2-install` wanted to install Cabal-3.0.2.0 even though a perfectly fine version was already installed. No amount of --constraint="Cabal installed", --constraint="Cabal < 3.0.2", --allow-older="Cabal", etc, would convince it otherwise.

The problem with trying to install Cabal-3.0.2.0 is that the tablet does not have enough memory (1GB is not enough). This is a big problem for the fundamental build tool of the language.

Eventually got my project to compile with `cabal v1-install` without needing to upgrade Cabal, but that option is going away (may already be gone in latest cabal-install, not sure).

fractal perturbation maths summary 

KF scaled double progress: derivatives for analytic distance estimation now work on regular CPU and OpenCL implementations.

OpenCL speedups over the last released KF are about 2.5x, except for Burning Ship "deep needle" locations where the new version is slightly slower. Guessing was disabled in these tests, and series approximation was disabled for Burning Ship (still broken).

CPU speedups over the last released KF are about 2x, except for Burning Ship "deep needle" locations where the new version is 3x slower.

I added a summary of the maths to a page on the fractal wiki:
fractalwiki.org/wiki/Rescaled_

london politics, nominative determinism 

ukip's candidate for mayor is called "gammons". there is also a candidate called "london".

full list of candidates:
bbc.co.uk/news/uk-england-lond

fractal perturbation maths summary 

I found a bug: Burning Ship (scaled double + OpenCL + series approximation) is broken; disabling any one of the three seems to fix it. Hopefully it's something simple. With series approximation and guessing disabled, one test case (6400x3600, e407) rendered in 6m55s with OpenCL on my GPU vs 11m21s for the regular CPU implementation. Disabling guessing seems to speed up OpenCL a lot, at least when there is negligible interior.

fractal perturbation maths summary 

aha, the mistake in [11] was sign(X*Y) instead of

[12]: sign(X)*sign(Y)

the latter works fine, but is still very slow. Presumably [11] broke because X and Y are small enough that X*Y underflows to 0, while sign(X)*sign(Y) is non-zero.
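In C the difference is something like this (a sketch; sign() is a hypothetical helper returning -1, 0, or +1):

    double sign(double x) { return (x > 0.0) - (x < 0.0); }

    /* [11]: breaks when X and Y are tiny: X*Y underflows to 0,
       so sign(X*Y) is 0 and the whole term vanishes */
    double opt11(double X, double Y, double x, double y) {
        return sign(X * Y) * (X * y + x * Y);
    }

    /* [12]: the fix, no product of small values inside sign() */
    double opt12(double X, double Y, double x, double y) {
        return sign(X) * sign(Y) * (X * y + x * Y);
    }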

Timing is about the same, because most iterations are the floatexp ones.

fractal perturbation maths summary 

I might have made a mistake, but it seems the optimisation in [11] is problematic after all. At a zoom of 1e807 near the needle, it gives incorrect rendering.

Moreover, with this optimisation disabled, it's slow:

scaled double 45s
long double 42s
floatexp 35s

This is because near the needle all the iterations have an X or Y or G*(X^2+Y^2) near 0 (because Y is near 0), which means floatexp iterations will be done anyway. Using floatexp from the get-go avoids many branches and rescalings in the inner loop, so it's significantly faster.

And, worse, the extra complexity makes it much slower than previous versions of KF:

scaled double (N/A)
long double 9.1s
floatexp 32s

fractal perturbation maths summary 

when perturbing the burning ship and other "abs variations", one ends up with things like

[8]: |XY + Xy + xY + xy| - |XY|

which naively gives 0 by catastrophic absorption and cancellation. laser blaster made a case analysis at fractalforums.com/new-theories which can be rewritten as

[9]: diffabs(c, d) := |c+d| - |c| = if sign(c) == sign(c + d) then sign(c) * d else -sign(c) * (2*c+d)
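in C, [9] works out as something like this (a sketch, writing the case analysis as comparisons on c and c + d):

    /* diffabs(c, d) = |c + d| - |c|, without catastrophic cancellation */
    double diffabs(double c, double d) {
        if (c >= 0.0)
            return (c + d >= 0.0) ? d : -(2.0 * c + d);  /* sign kept / flipped */
        else
            return (c + d > 0.0) ? (2.0 * c + d) : -d;
    }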

when d is small the first case is much more likely. with rescaling in the mix, [8] works out as

[10]: diffabs(XY/s, Xy + xY + sxy)

which has the risk of overflow when 's' is small, but the signs work out ok even for infinite 'c' as 'd' is known to be finite.

moreover, if s = 0 due to underflow, the first branch will always be taken (except when XY is small, when a floatexp iteration will be performed instead). and as s >= 0 by construction, diffabs(XY/s, Xy + xY + sxy) reduces to

[11]: sign(X Y) * (X y + x Y)

with this implemented in KF, the first benchmarks are in at zoom depth 1e449:

floatexp 135s
long double 52.1s
scaled double 34.5s

a nice speedup!

I tried this before with fixed scaling per zoom depth (for 1e300-1e600), but I think the reason it didn't work was the lack of floatexp iterations when XY was small. need to test the new implementation on the needle to be sure...

claude boosted

@mathr would be nice if someone submitted a historical overview of the whole distro/branch/fork lineage: animal, pd, pure devil, desiredata, l2-ork, pd extended, purr-data, etc.

claude boosted

10% of techy websites sending interest-cohort=() will have negligible effect on the efficacy of google's spyware

10% of techy websites blocking the browser would have an effect

claude boosted

ArtFutura @ Watermans present VideoMapping Workshop
Saturday 17 & Sunday 18
£15
Narrating Structures: Videomapping Workshop. Special online edition
‘Narrating Structures’ is an intensive workshop where we introduce projection mapping.
We will look at diverse techniques for the production of audiovisual projects, video and light mapping, and their application in artistic and creative practice. In this edition for Art Futura, we will be focusing on how to stream your mapping content to a streaming service.
This online edition will present a 2-day workshop in sections with hands-on practice.

watermans.org.uk/events/narrat

fractal perturbation maths summary 

the other part of what K I Martin's sft_maths.pdf popularized was that iterating [3] gives a polynomial series in c:

[6]: z_n = \sum A_{n,k} c^k

(with 0 constant term). This can be used to "skip" a whole bunch of iterations, assuming that truncating the series doesn't cause too much trouble(*)

substituting [6] into [3] gives

[7]: \sum A_{n+1,k} c^k = 2 Z \sum A_{n,k} c^k + (\sum A_{n,k} c^k)^2 + c

equating coefficients of c^k gives recurrence relations for the series coefficients A_{n,k}, see mathr.co.uk/blog/2016-03-06_si
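for power 2 the recurrence is A_{n+1,k} = 2 Z A_{n,k} + \sum_{j=1}^{k-1} A_{n,j} A_{n,k-j} + [k == 1]; a sketch in C (NTERMS and the array layout are illustrative):

    #include <complex.h>

    #define NTERMS 8

    /* advance series coefficients A[1..NTERMS] by one iteration,
       where Z is the reference orbit value at this iteration:
       A'_k = 2 Z A_k + sum_{j=1}^{k-1} A_j A_{k-j} + (k == 1 ? 1 : 0) */
    void series_step(double complex A[NTERMS + 1], double complex Z) {
        double complex B[NTERMS + 1];
        for (int k = 1; k <= NTERMS; ++k) {
            double complex sum = 0;
            for (int j = 1; j < k; ++j)
                sum += A[j] * A[k - j];  /* coefficient of c^k in the square */
            B[k] = 2 * Z * A[k] + sum + (k == 1 ? 1 : 0);
        }
        for (int k = 1; k <= NTERMS; ++k)
            A[k] = B[k];
    }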

(*) the traditional way to check that it's ok to apply the series approximation at an iteration is to verify that it doesn't deviate too far from regular iterations (or perturbation iterations) at a collection of "probe" points. when it starts to deviate, roll back an iteration and initialize all the image pixels with [6] at that iteration

fractal perturbation maths summary 

optimization: if S underflowed to 0 in unscaled double, you don't need to calculate the + S w^2 term at all when Z is not small. when Z is small you need the full range S (floatexp)

optimization: similarly you can skip the + d if it underflowed.

optimization: for higher powers there will be terms involving S^2 w^3 (for example), which might not need to be calculated due to underflow.

ideally these tests would be performed once at rescaling time, instead of in every inner loop iteration (though the branches would be highly predictable, I suppose).
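a sketch of [5] with those skips in the inner loop (the flag names are illustrative; as noted, they'd ideally be computed once per rescale):

    #include <complex.h>
    #include <stdbool.h>

    /* one iteration of [5]: w -> 2 Z w + S w^2 + d, skipping terms
       whose factors underflowed to 0 in unscaled double.
       when Z is small, a full-range floatexp iteration is needed
       instead (not shown here). */
    double complex iter_scaled(double complex Z, double complex w,
                               double S, double complex d,
                               bool S_is_zero, bool d_is_zero) {
        double complex w1 = 2 * Z * w;
        if (!S_is_zero) w1 += S * w * w;  /* skip S w^2 when S == 0 */
        if (!d_is_zero) w1 += d;          /* skip + d when it underflowed */
        return w1;
    }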

fractal perturbation maths summary 

start with the iteration formula
[1]: Z -> Z^2 + C

perturb the variables with unevaluated sums
[2]: (Z + z) -> (Z + z)^2 + (C + c)

do symbolic algebra to avoid the catastrophic absorption when adding tiny values z to large values Z
[3]: z -> 2 Z z + z^2 + c

scale the values to avoid underflow (substitute S w = z and S d = c)
[4]: S w -> 2 Z S w + S^2 w^2 + S d

cancel out one scale factor S throughout
[5]: w -> 2 Z w + S w^2 + d

choose S so that |w| stays around 1. when |w| is at risk of overflow (or underflow), redo the scaling; this typically happens every few hundred iterations, as |Z| <= 2.

now C, Z is the "reference" orbit, computed in high precision using [1] and rounded to (unscaled) double, which works fine most of the time. c, z are the "pixel" orbit, you can do many of these near each reference (e.g. an entire image).

problem: if |Z+z| << |Z| at any iteration, glitches can occur. See fractalforums.com/announcement
solution: retry with a new reference, or (only works for some formulas) rebase to a new reference and carry on

problem: if |Z| is very small, it can underflow to 0 in unscaled double in [5], so one needs to do a full range (e.g. floatexp) iteration at those points. It also means that |w| can change dramatically so rescaling is necessary. See fractalforums.org/programming/
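a sketch of the basic (unscaled, double precision) pixel loop for [3] against a reference orbit from [1]; names and the escape test are illustrative, and rescaling [4]/[5] and glitch handling are omitted:

    #include <complex.h>

    /* Z[0..nref] is the reference orbit (high precision, rounded to
       double); c is the pixel's offset from the reference C */
    int perturbed_escape(const double complex *Z, int nref,
                         double complex c, double escape_radius2) {
        double complex z = 0;
        for (int n = 0; n < nref; ++n) {
            z = 2 * Z[n] * z + z * z + c;        /* [3] */
            double complex full = Z[n + 1] + z;  /* Z + z, the true value */
            if (creal(full) * creal(full) + cimag(full) * cimag(full)
                > escape_radius2)
                return n + 1;                    /* escaped */
        }
        return -1;                               /* no escape within nref */
    }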

Turns out glitch detection was fine all along. The problem was transiting via low-range double between the series approximation output and the perturbation input, so the values underflowed to 0 and caused problems. Passing the extended range doubles ("floatexp", with a wider exponent stored separately) through instead of converting fixed this bug.
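for illustration, a floatexp-style type is roughly this (a sketch; the real implementation differs in details):

    #include <math.h>
    #include <stdint.h>

    /* extended range double: mantissa kept near 1, with a wide
       exponent stored separately so deep zooms can't underflow */
    typedef struct {
        double mantissa;   /* in [0.5, 1) after normalization, or 0 */
        int64_t exponent;  /* base-2 exponent, far beyond double's range */
    } floatexp;

    floatexp fe_make(double x, int64_t e) {
        int ex = 0;
        double m = frexp(x, &ex);  /* x = m * 2^ex */
        floatexp r = { m, e + ex };
        return r;
    }

    floatexp fe_mul(floatexp a, floatexp b) {
        return fe_make(a.mantissa * b.mantissa, a.exponent + b.exponent);
    }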

Speed report: in one location, scaled double takes about 2/3 the time of x87 long double, and 1/4 the time of floatexp. Nice acceleration, albeit so far only for the power 2 and power 3 (untested as yet) Mandelbrot set formulas.

Got scaled iterations partly working. Only the code path without derivatives or SIMD so far.

Some trouble with glitch detection though :( Seems some glitches are not detected, leading to bad images...

Debugging this is painful because it's in an XSLT file that generates vast amounts of C++ code (all the inner loops for all the formulas) that takes over 10 minutes to compile.
