The new stuff (wakes, patterns, colours, dark vs light theme, colour vs monochrome theme) isn't serialized yet, and doing it in a backward-compatible way will be a pain: it would be annoying if new versions could no longer load old parameter files (I think I already broke that once when adding line dashing or thereabouts...).

new features implemented:

✅ wake clipping (draw from narrowest to widest, subtracting each from the clip region after drawing, which prevents overlap; see the sketch after this list)

✅ pattern fills (select from dropdown combo box in toolbar before activating wake tool)

✅ colour fills (select from colour button in toolbar before activating wake tool)

✅ global toggle of colour/monochrome mode (so you can use colours for screen editing, and turn them off for printing)
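
The wake clipping above boils down to a loop like this rough sketch, where `struct wake` and `wake_path()` are hypothetical stand-ins for the real annotation types; since cairo's `cairo_clip()` can only intersect, subtracting a shape from the clip region uses the even-odd fill rule trick (assuming the wake outline doesn't self-intersect):
```C
#include <cairo.h>

struct wake;  // hypothetical annotation type
// hypothetical: appends the wake's outline to the current path
extern void wake_path(cairo_t *cr, const struct wake *w);

// fill wakes from narrowest to widest, subtracting each
// from the clip region after drawing so they can't overlap
void draw_wakes(cairo_t *cr, const struct wake *wakes, int nwakes)
{
  cairo_save(cr);
  for (int k = 0; k < nwakes; ++k)
  {
    cairo_new_path(cr);
    wake_path(cr, &wakes[k]);
    cairo_fill_preserve(cr);  // keep the path for clipping
    // clip = clip minus wake: even-odd of (clip bounding box + wake)
    double x1, y1, x2, y2;
    cairo_clip_extents(cr, &x1, &y1, &x2, &y2);
    cairo_rectangle(cr, x1, y1, x2 - x1, y2 - y1);
    cairo_set_fill_rule(cr, CAIRO_FILL_RULE_EVEN_ODD);
    cairo_clip(cr);
    cairo_set_fill_rule(cr, CAIRO_FILL_RULE_WINDING);
  }
  cairo_restore(cr);
}
```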

more ideas to implement:

❎ medium colour mode with differently coloured pattern fills (a bit tricky, as cairo patterns have their own colour, so patterns would need to be created on the fly instead of once at program startup)

❎ colour strokes (should be easy, just boring plumbing the values through the code)

❎ editing patterns / colours of existing annotations (could be hard, but should be possible to add widgets for each annotation in the annotation list)

❎ better algorithm for "find the other ray of the wake" than "next (anti)clockwise match for (pre)period" (maybe require the two rays to be manually selected in the UI, or use screen-space distance of endpoints as a filter) (see the nested red areas in the colour images attached)

❎ extending rays of wakes (not sure how to do this, the wake makes a copy of the rays' points at wake creation time)

❎ make the filament tool fill its wakes

Turns out it was much simpler to just clamp the potentially huge wake image coordinates to +/-10 in mpfr_t before converting to lower-range double for cairo filling.

The image is roughly +/-1 in that coordinate frame, depending on aspect ratio, so clamping may break appearance with very wide images; I left a note in the code for later.
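
The clamp itself is tiny; a minimal sketch (function name hypothetical) using mpfr's built-in min/max before the lossy conversion:
```C
#include <mpfr.h>

// clamp a high-precision coordinate to [-10, 10] before the
// lossy conversion to double, so that astronomically large
// ray coordinates at deep zoom can't overflow cairo
double clamp_for_cairo(mpfr_t x)  // hypothetical name
{
  mpfr_t lo, hi;
  mpfr_init2(lo, 53);
  mpfr_init2(hi, 53);
  mpfr_set_si(lo, -10, MPFR_RNDN);
  mpfr_set_si(hi,  10, MPFR_RNDN);
  mpfr_max(x, x, lo, MPFR_RNDN);  // x = max(x, -10)
  mpfr_min(x, x, hi, MPFR_RNDN);  // x = min(x,  10)
  mpfr_clear(lo);
  mpfr_clear(hi);
  return mpfr_get_d(x, MPFR_RNDN);
}
```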

feature idea: fill wakes of ray pairs (see the path sketch after this list)

- compute lowest iteration count in image (which occurs on its boundary)

- follow lower ray outwards from endpoint P0 to that count, keeping track of the last intersection P1 with image edges

- follow upper ray outwards from endpoint Q0 to that count, keeping track of all intersections with image edges

- find first intersection Q1 of upper ray with image edge anticlockwise from P1

- fill region P0-(along lower ray)-P1-(along image edges)-Q1-(along upper ray)-Q0-(close loop)-P0

- this is so complicated because rays may have multiple segments within the image, and naive filling of the whole ray extent to a fixed large radius at deep zooms will overflow libcairo's number types and explode everything in NaNs

- it may need to be more complicated still, for deep zooms off-centre from spirals, where the above could still overflow: the solution may be to compute all intersections of both rays with the image edges, along with the iteration counts at those points and whether the ray is leaving or entering the image, so that they can be ordered semantically, with the direction of drawing along the image edges determined by consistency
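
In cairo path terms, the simple case of the fill might look like this rough sketch; the ray representation and `edge_walk()` are hypothetical, and the hard part (ordering the edge intersections) is hidden inside the latter:
```C
#include <cairo.h>

// hypothetical: ray points ordered outwards from the endpoint,
// already clamped to a cairo-safe coordinate range, and
// truncated at P1 / Q1 respectively
struct ray { int npoints; double *x, *y; };
// hypothetical: append image-edge points anticlockwise from P1 to Q1
extern void edge_walk(cairo_t *cr,
                      const struct ray *lower, const struct ray *upper);

// fill P0 -(along lower ray)- P1 -(along image edges)- Q1
// -(along upper ray, reversed)- Q0 -(close)- P0
void fill_wake(cairo_t *cr,
               const struct ray *lower, const struct ray *upper)
{
  cairo_move_to(cr, lower->x[0], lower->y[0]);   // P0
  for (int i = 1; i < lower->npoints; ++i)       // out to P1
    cairo_line_to(cr, lower->x[i], lower->y[i]);
  edge_walk(cr, lower, upper);                   // P1 .. Q1
  for (int i = upper->npoints - 1; i >= 0; --i)  // back in to Q0
    cairo_line_to(cr, upper->x[i], upper->y[i]);
  cairo_close_path(cr);                          // Q0 .. P0
  cairo_fill(cr);
}
```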

feature idea: fill hyperbolic components

- trace boundary using Newton's method in two complex variables (see the Newton-step sketch after this list)

- adaptive subdivision of visible parts, or

- compute control points for cubic Bezier spline segments so that the curve passes through the desired points

- make sure cusps of cardioids are sharp

- be careful near roots of circles

- pattern fill for low-ink printing
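
The Newton step for the boundary tracing might look like this sketch in machine precision (deep zooms would need mpfr/mpc): the unknowns are (z, c), with z a period-p periodic point and the multiplier pinned to a point m = e^{it} on the unit circle; tracing then means stepping t and re-running Newton from the previous (z, c). This is the textbook system, not the actual implementation:
```C
#include <complex.h>

// one Newton step for the 2x2 complex system
//   G1(z,c) = f^p(z,c) - z      = 0  (z is periodic with period p)
//   G2(z,c) = d/dz f^p(z,c) - m = 0  (multiplier m on the unit circle)
// with f(z,c) = z^2 + c, updating z and c in place
void newton_step(int p, double complex m,
                 double complex *z0, double complex *c0)
{
  double complex z = *z0, c = *c0;
  double complex w = z;           // f^n(z,c)
  double complex A = 1, B = 0;    // dw/dz, dw/dc
  double complex C = 0, D = 0;    // dA/dz, dA/dc
  for (int i = 0; i < p; ++i)
  {
    C = 2 * (A * A + w * C);
    D = 2 * (A * B + w * D);
    A = 2 * w * A;
    B = 2 * w * B + 1;
    w = w * w + c;
  }
  double complex G1 = w - z, G2 = A - m;
  // Jacobian [[A - 1, B], [C, D]]; solve by Cramer's rule
  double complex det = (A - 1) * D - B * C;
  *z0 = z - (G1 * D - G2 * B) / det;
  *c0 = c - ((A - 1) * G2 - C * G1) / det;
}
```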

✅ click to select

Mouse clicks in the exploration window can now select nearby annotations in the annotation list. It wasn't as hard as I expected; the implementation is as described at the top of this thread.

Screencast:
mathr.co.uk/mandelbrot/2019-10 1920x1080p60 2mins 15MB
(also showing light vs dark theme, line dashing patterns, and version string in title bar)

Another idea for a feature: option to filter the annotation list to list only annotations that are visible in the exploration window.

Switched from Haar wavelets for energy per octave (11 bins) to a Discrete Fourier Transform (via the fftw3 library) for the energy spectrum (513 bins). Overlap factor 16, raised cosine window.
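
The 513 is fftw's r2c output size for a real block of 1024 samples (the block size from the earlier posts below): 1024/2 + 1 complex bins. As a rough sketch:
```C
#include <complex.h>
#include <fftw3.h>

#define N 1024  // block size (overlap factor 16 => hop of 64)

// energy spectrum of one windowed block:
// N real samples in, N/2 + 1 = 513 energies out
void energy_spectrum(const float *block, const float *window, float *energy)
{
  static float in[N];
  static fftwf_complex out[N / 2 + 1];
  static fftwf_plan plan = 0;
  if (!plan)
    plan = fftwf_plan_dft_r2c_1d(N, in, out, FFTW_ESTIMATE);
  for (int i = 0; i < N; ++i)
    in[i] = window[i] * block[i];  // raised cosine window
  fftwf_execute(plan);
  for (int k = 0; k <= N / 2; ++k)
    energy[k] = crealf(out[k]) * crealf(out[k])
              + cimagf(out[k]) * cimagf(out[k]);
}
```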

Enlarged the self-organizing map from 8x8 to 16x16, using Earth-Mover's Distance instead of Euclidean Distance when choosing the best matching unit to update the SOM.
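
In 1D, with both spectra normalized to the same total mass (an assumption here), Earth-Mover's Distance reduces to the L1 distance between cumulative sums, which keeps it cheap enough to evaluate against every SOM unit:
```C
#include <math.h>

// 1-D Earth-Mover's Distance between two histograms
// of equal total mass: sum of |difference of prefix sums|
float emd_1d(const float *a, const float *b, int n)
{
  float cum = 0, d = 0;
  for (int i = 0; i < n; ++i)
  {
    cum += a[i] - b[i];  // how much mass must still move rightwards
    d += fabsf(cum);
  }
  return d;
}
```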

SOM weights are initialized via Cholesky decomposition of the covariance matrix to generate correlated Gaussian random variates (as before). Using GNU GSL for the linear algebra and pseudo-random number generation.
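
The correlated-Gaussian trick: if Sigma = L L^T then x = L z has covariance Sigma when z is i.i.d. standard normal. A sketch with GSL (gsl_linalg_cholesky_decomp1 is the recent-GSL name; in practice you'd decompose once, not per draw):
```C
#include <gsl/gsl_blas.h>
#include <gsl/gsl_linalg.h>
#include <gsl/gsl_matrix.h>
#include <gsl/gsl_randist.h>
#include <gsl/gsl_rng.h>
#include <gsl/gsl_vector.h>

// overwrite x with one draw from N(0, sigma);
// sigma is clobbered (lower triangle replaced by L)
void correlated_gaussian(gsl_rng *rng, gsl_matrix *sigma, gsl_vector *x)
{
  gsl_linalg_cholesky_decomp1(sigma);              // sigma <- Cholesky factor L
  for (size_t i = 0; i < x->size; ++i)
    gsl_vector_set(x, i, gsl_ran_ugaussian(rng));  // z ~ N(0, I)
  // x <- L z, so cov(x) = L L^T = sigma
  gsl_blas_dtrmv(CblasLower, CblasNoTrans, CblasNonUnit, sigma, x);
}
```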

Still using 1st-order Markov chain for the resynthesis.

Analysis pass takes 16mins per hour of input audio, single threaded. Thinking about parallelism as that's a long wait when experimenting.

Synthesis pass is very quick, less than a second per minute of output audio.

Refs:
fftw.org/fftw3_doc/The-Halfcom
en.wikipedia.org/wiki/Earth_mo
en.wikipedia.org/wiki/Self-org
en.wikipedia.org/wiki/Cholesky

Ugh. Spent an hour or two trying to debug my tile assembler, only to remember at a late stage that "guessing" does strange things with log(de) colouring sometimes, and indeed with guessing disabled the assembled image is perfect.

Assembling tiled images with OpenEXR is pretty neat, you can just specify the x/y strides in the read framebuffer slices and the library does all the scattering for you.

mathr.co.uk/exrtact is the microsite for the project.

modulating a drone (my Harmonic Protocol thingy) instead of white noise

Getting closer to what I want. This time I used a self-organizing map (8x8 cells x 11 octaves) to cluster the snippets of audio, then made a 1st-order Markov chain out of the SOM (an 8x8x8x8 array of weights, where the first 8x8 indexes the past and the last 8x8 the future).

The attached audio is made by running the Markov chain at random, applying the SOM weights to white noise at each time step.
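
Running the chain at random is just weighted sampling of the next SOM cell given the current one; a minimal sketch with the 8x8x8x8 table flattened to 64x64 (names hypothetical):
```C
#include <stdlib.h>

#define CELLS 64  // 8x8 SOM cells flattened

// one step of the 1st-order Markov chain: sample the next
// cell with probability proportional to the learned weights
int markov_step(const float weights[CELLS][CELLS], int current)
{
  float total = 0;
  for (int j = 0; j < CELLS; ++j)
    total += weights[current][j];
  float r = total * (rand() / (RAND_MAX + 1.0));
  for (int j = 0; j < CELLS; ++j)
  {
    r -= weights[current][j];
    if (r < 0)
      return j;
  }
  return CELLS - 1;  // guard against rounding
}
```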

Audio block size is 1024, overlap factor is 16, raised cosine window function. Each windowed block is converted to octaves via Haar wavelet transform, then each octave is either analysed for RMS energy (analysis pass) or amplified by a factor (synthesis pass). In the synthesis pass the amplified octaves are transformed back to audio, windowed again and overlapped/added with the other blocks.
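
One synthesis block in outline; haar()/unhaar() stand for hypothetical length-parameterized wrappers around the Haar code in the post further down, and octave_gains is whatever the current chain state dictates:
```C
#define N 1024  // block size: 2^10 gives 10 difference bands + DC = 11 octaves

extern void haar(float *buf, int n);    // hypothetical wrappers around the
extern void unhaar(float *buf, int n);  // Haar transform code further down

// one synthesis block: window, to octaves, amplify, back to
// audio, window again, overlap-add into out at offset t
void synth_block(const float *noise, const float *window,
                 const float *octave_gains, float *out, long t)
{
  float buf[N];
  for (int i = 0; i < N; ++i)
    buf[i] = window[i] * noise[t + i];  // raised cosine window
  haar(buf, N);
  buf[0] *= octave_gains[0];            // octave 0 = DC
  for (int k = 1; k <= 10; ++k)         // octave k spans [2^(k-1), 2^k)
    for (int i = 1 << (k - 1); i < 1 << k; ++i)
      buf[i] *= octave_gains[k];
  unhaar(buf, N);
  for (int i = 0; i < N; ++i)
    out[t + i] += window[i] * buf[i];   // overlap-add, hop = N / 16
}
```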

The synthesis pass generates 5mins of audio in less than 5 seconds on one of my desktop cores, so it looks promising for porting to Emscripten to run in the browser on low-power mobile devices or wherever.

linux woes 

@celesteh You need the [separator] object to push and pop the stack and make it behave sensibly. I've heard good things about ofelia too; it's newer than Gem so it might have less historical baggage.

The feedback process was too hard to control, so I took a different approach: normalizing the non-DC part of the energy table (by RMS) gives good results in one pass. I suspect the reason it doesn't sound very much like speech is that there is no linkage between the different octaves; they each do their own thing independently.

Starting from the Energy Per Octave Per Rhythm table, I tried synthesizing speech-like noise by applying the template to white noise. But this didn't work at all well as the white noise had no rhythmic content to speak of, so amplifying it didn't do much (0 * gain = 0).

Feeding back the output to the input, so the noise becomes progressively more rhythmic, worked a lot better - it takes a couple of minutes to escape from silence, and then there are about 5 sweet minutes until it goes all choppy, with very loud peaks separated by silences. I tested with the feedback delay synchronous to the analysis windows; I'm trying a desynchronized delay next.

Energy per Octave per Rhythm via repeated Haar wavelet transform. Raised cosine window for energy per octave, then a rectangular window for (energy per octave) per rhythmic octave. Rhythm 0 is at DC, i.e. the average over all time.
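
In other words: collect the per-block octave energies into time series, then Haar transform each series along the time axis. A sketch, with the block count R and the haar() wrapper hypothetical:
```C
#define OCTAVES 11
#define R 256  // number of analysis blocks per rectangular window (assumed)

extern void haar(float *buf, int n);  // hypothetical wrapper, as before

// second-stage transform: energy[o][r] is the energy of octave o
// in analysis block r; after the Haar transform along the time
// axis, rhythm[o][0] is DC (the average over all time) and the
// rest are rhythmic octaves
void energy_per_octave_per_rhythm(const float energy[OCTAVES][R],
                                  float rhythm[OCTAVES][R])
{
  for (int o = 0; o < OCTAVES; ++o)
  {
    for (int r = 0; r < R; ++r)
      rhythm[o][r] = energy[o][r];
    haar(rhythm[o], R);
  }
}
```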

@sciss good to know, I hadn't heard that term before

@sciss Haar transform of a 2^N buffer is basically:

- find sum and difference of each (non-overlapping) pair of adjacent samples (therefore *not* time invariant!)
- put the sums in the first half
- put the differences in the second half
- repeat on the first half, until length is 1

In code:
```C
// compute Haar wavelet transform
// INPUT_LENGTH must be a power of two
// input and output are in haar[src]
// haar[dst] is used for temporary storage
float haar[2][INPUT_LENGTH];
const int src = 0;
const int dst = 1;
for (int length = INPUT_LENGTH >> 1; length > 0; length >>= 1)
{
for (int i = 0; i < length; ++i)
{
float a = haar[src][2 * i + 0];
float b = haar[src][2 * i + 1];
float s = (a + b) / 2;
float d = (a - b) / 2;
haar[dst][ i] = s;
haar[dst][length + i] = d;
}
for (int i = 0; i < INPUT_LENGTH; ++i)
haar[src][i] = haar[dst][i];
}
```
It's based on some Java code I found on Wikipedia or elsewhere (which I can't find again right now), but I added the divisions by 2 for more usable normalization.

So far I've implemented the timbre stamp algorithm:

c <- haar(control-input)
n <- haar(noise-input)
e <- calculate-energy-per-octave(c)
o <- amplify-octaves-by(n, e)
output <- unhaar(o)

(operating on windowed overlapped chunks)
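
Per block that comes out roughly like this in C, with hypothetical haar()/unhaar() wrappers around the code above and RMS per band as the energy measure:
```C
#include <math.h>

#define N 1024  // assumed block size

extern void haar(float *buf, int n);    // hypothetical wrappers
extern void unhaar(float *buf, int n);  // around the code above

// timbre stamp, one windowed block: measure the RMS energy per
// octave of the control signal, amplify the octaves of the
// noise signal by it (both inputs are transformed in place)
void timbre_stamp_block(float *control, float *noise, float *out)
{
  haar(control, N);
  haar(noise, N);
  out[0] = noise[0] * fabsf(control[0]);  // octave 0 = DC
  for (int k = 1; k <= 10; ++k)           // octave k spans [2^(k-1), 2^k)
  {
    int lo = 1 << (k - 1), hi = 1 << k;
    float e = 0;
    for (int i = lo; i < hi; ++i)
      e += control[i] * control[i];
    e = sqrtf(e / (hi - lo));             // RMS energy of octave k
    for (int i = lo; i < hi; ++i)
      out[i] = noise[i] * e;
  }
  unhaar(out, N);
}
```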

The attachment has a segment of The Archers (a BBC Radio 4 serial) as control input, with white noise as noise input. The output is normalized afterwards, as otherwise it is very quiet (I suspect because the white noise has little energy in the lower octaves to start with).
