new features implemented:
✅ wake clipping (draw from narrowest to widest, subtracting each from clip region after drawing, prevents overlap)
✅ pattern fills (select from dropdown combo box in toolbar before activating wake tool)
✅ colour fills (select from colour button in toolbar before activating wake tool)
✅ global toggle of colour/monochrome mode (so you can use colours for screen editing, and turn them off for printing)
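The wake-clipping idea can be sketched as follows. This is a toy model, not the actual drawing code: sets of pixel coordinates stand in for cairo clip regions, and `draw_wakes` is a hypothetical name.

```python
# Toy model of the wake-clipping pass: each wake is drawn clipped to
# the remaining region, then its own area is subtracted so that later
# (wider) wakes cannot overlap it. The real code uses cairo clipping;
# sets of pixel coordinates stand in for regions here.

def draw_wakes(wakes, region):
    """wakes: list of pixel sets, ordered narrowest to widest.
    region: set of pixels still available for drawing.
    Returns the pixel set actually painted for each wake."""
    painted = []
    for wake in wakes:
        visible = wake & region    # clip the wake to what is left
        painted.append(visible)
        region = region - visible  # subtract it for subsequent wakes
    return painted

# Tiny usage example: the second, wider wake loses the overlap.
narrow = {(1, 1), (2, 1)}
wide = {(1, 1), (2, 1), (3, 1), (4, 1)}
screen = {(x, 1) for x in range(6)}
result = draw_wakes([narrow, wide], screen)
```

Drawing narrowest first matters: a wake nested inside another must win the overlapping pixels, and subtracting as you go makes that automatic.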
more ideas to implement:
❎ medium colour mode with differently coloured pattern fills (bit tricky as cairo patterns have their own colour, so would need to create patterns on the fly instead of once at program startup)
❎ colour strokes (should be easy, just boring plumbing the values through the code)
❎ editing patterns / colours of existing annotations (could be hard, but should be possible to add widgets for each annotation in the annotation list)
❎ better algorithm for "find the other ray of the wake" than "next (anti)clockwise match for (pre)period" (maybe require the two rays to be manually selected in the UI, or use screen-space distance of endpoints as a filter) (see nested red areas in colour images attached)

❎ extending rays of wakes (not sure how to do this, the wake makes a copy of the rays' points at wake creation time)
❎ make the filament tool fill its wakes
Turns out it was much simpler to just clamp the potentially huge wake image coordinates to +/-10 in mpfr_t before converting to lower-range double for cairo filling.
The image is roughly +/-1 in that coordinate frame, depending on aspect ratio; clamping may break appearance with very wide images, so I left a #FIXME note in the code for later.
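The clamp itself is trivial; a sketch with plain Python floats standing in for the `mpfr_t` values (function names are illustrative):

```python
# Sketch of the coordinate clamp: the real code clamps in mpfr_t
# before converting to double for cairo. The image occupies roughly
# [-1, +1], so clamping to [-10, +10] keeps wake polygons finite
# without visibly moving any on-screen vertex.

CLAMP = 10.0

def clamp_coord(x, limit=CLAMP):
    return max(-limit, min(limit, x))

def clamp_point(p):
    return (clamp_coord(p[0]), clamp_coord(p[1]))
```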
Switched from Haar wavelets for energy per octave (11 bins), to Discrete Fourier Transform (via the fftw3 library) for energy spectrum (513 bins). Overlap factor 16, raised cosine window.
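The per-block spectrum analysis can be sketched like this. The real code uses fftw3; a naive DFT stands in here for clarity, and the hop size follows from the overlap factor (block length / 16, so 64 samples for a 1024-sample block).

```python
import cmath, math

def raised_cosine(n):
    # Hann ("raised cosine") window of length n
    return [0.5 - 0.5 * math.cos(2 * math.pi * i / n) for i in range(n)]

def energy_spectrum(block):
    """Energy per DFT bin of one windowed block.
    A block of 1024 samples yields 1024//2 + 1 = 513 bins."""
    n = len(block)
    w = raised_cosine(n)
    x = [s * wi for s, wi in zip(block, w)]
    bins = []
    for k in range(n // 2 + 1):  # real input: keep non-negative freqs
        c = sum(x[i] * cmath.exp(-2j * math.pi * k * i / n)
                for i in range(n))
        bins.append(abs(c) ** 2)
    return bins
```

For real input only the first n/2 + 1 bins are distinct (the rest are conjugate mirrors), which is where the 513 comes from.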
Enlarged the self-organizing map from 8x8 to 16x16, using Earth-Mover's Distance instead of Euclidean Distance when choosing the best matching unit to update the SOM.
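A sketch of the best-matching-unit search: for 1-D histograms such as an energy spectrum, earth mover's distance reduces to the L1 distance between cumulative sums. Treating the spectrum frame as a 1-D histogram is my assumption about how the distance is applied; the flat list of weight vectors below stands in for the 16x16 map.

```python
# Earth mover's distance for 1-D histograms: accumulate the mass that
# still has to be carried past each bin, and sum its absolute value.

def emd_1d(a, b):
    total = 0.0
    carry = 0.0
    for x, y in zip(a, b):
        carry += x - y       # mass still to be moved past this bin
        total += abs(carry)
    return total

def best_matching_unit(som, sample):
    """som: list of weight vectors, sample: one spectrum frame."""
    return min(range(len(som)), key=lambda i: emd_1d(som[i], sample))
```

Unlike Euclidean distance, EMD rewards being close in frequency as well as close in level: moving energy one bin over costs less than moving it across the whole spectrum.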
Initial SOM weights initialized via Cholesky decomposition of covariance matrix to generate correlated Gaussian random variates (as before). Using GNU GSL to do the linear algebra and pseudo random number generation.
Still using 1st-order Markov chain for the resynthesis.
Analysis pass takes 16mins per hour of input audio, single threaded. Thinking about parallelism as that's a long wait when experimenting.
Synthesis pass is very quick, less than a second per minute of output audio.
Getting closer to what I want. This time I used a self-organizing map (8x8 cells x11 octaves) to cluster the snippets of audio, then made a 1st-order Markov chain out of the SOM (so an 8x8x8x8 array of weights, the first 8x8 is the past and the last 8x8 is the future).
The attached audio is made by running the Markov chain at random, applying the SOM weights to white noise at each time step.
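Running the chain at random can be sketched as below, with the 8x8x8x8 weight array flattened to a 64x64 transition table (an equivalent indexing, assumed for brevity; names are illustrative):

```python
import random

# Sketch of the resynthesis chain: transitions[past][future] holds the
# weight accumulated during analysis; synthesis samples the next SOM
# cell in proportion to those weights.

CELLS = 8 * 8

def step(transitions, cell, rng=random):
    weights = transitions[cell]
    total = sum(weights)
    if total == 0:
        return rng.randrange(CELLS)  # unseen state: jump anywhere
    r = rng.random() * total
    for nxt, w in enumerate(weights):
        r -= w
        if r < 0:
            return nxt
    return CELLS - 1

def run(transitions, start, steps, rng=random):
    out = [start]
    for _ in range(steps):
        out.append(step(transitions, out[-1], rng))
    return out
```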
Audio block size is 1024, overlap factor is 16, raised cosine window function. Each windowed block is converted to octaves via Haar wavelet transform, then each octave is either analysed for RMS energy (analysis pass) or amplified by a factor (synthesis pass). In the synthesis pass the amplified octaves are transformed back to audio, windowed again and overlapped/added with the other blocks.
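The per-block octave processing can be sketched like this (windowing and overlap-add omitted). A Haar transform splits a block of 2^k samples into k detail bands plus one approximation, so 1024 samples give the 11 bands mentioned above.

```python
import math

def haar(block):
    """Haar decomposition into octave bands, finest first,
    with the final approximation ("DC") last."""
    bands = []
    a = list(block)
    while len(a) > 1:
        avg = [(a[i] + a[i + 1]) / 2 for i in range(0, len(a), 2)]
        det = [(a[i] - a[i + 1]) / 2 for i in range(0, len(a), 2)]
        bands.append(det)
        a = avg
    bands.append(a)
    return bands

def unhaar(bands):
    """Inverse of haar()."""
    a = list(bands[-1])
    for det in reversed(bands[:-1]):
        nxt = []
        for avg, d in zip(a, det):
            nxt.extend([avg + d, avg - d])
        a = nxt
    return a

def rms_per_octave(bands):       # analysis pass
    return [math.sqrt(sum(x * x for x in b) / len(b)) for b in bands]

def amplify_octaves(bands, gains):  # synthesis pass
    return [[x * g for x in b] for b, g in zip(bands, gains)]
```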
The synthesis pass generates 5mins of audio in less than 5 seconds on one of my desktop cores, so looking promising for porting to Emscripten to run in the browser on low-power mobile devices or wherever.

Feedback process was too hard to control, so I took a different approach: normalizing the non-DC part of the energy table (by RMS) gives good results in one pass. I suspect the reason it doesn't sound very much like speech is because there is no linkage between the different octaves, they each do their own thing independently.
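A sketch of the one-pass fix, under my reading of "normalizing the non-DC part by RMS": leave the DC entry of each energy row alone and divide the remaining entries by their RMS, so every row imposes a comparable amount of modulation.

```python
import math

# Hypothetical helper: which entry counts as "DC" is an assumption.

def normalize_non_dc(energy, dc_index=0):
    rest = [e for i, e in enumerate(energy) if i != dc_index]
    rms = math.sqrt(sum(e * e for e in rest) / len(rest))
    if rms == 0:
        return list(energy)  # nothing to normalize
    return [e if i == dc_index else e / rms
            for i, e in enumerate(energy)]
```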
Starting from the Energy Per Octave Per Rhythm table, I tried synthesizing speech-like noise by applying the template to white noise. But this didn't work at all well as the white noise had no rhythmic content to speak of, so amplifying it didn't do much (0 * gain = 0).
Feeding back the output to the input, so the noise becomes progressively more rhythmic, worked a lot better - takes a couple of minutes to escape from silence, and then there are about 5 sweet minutes until it goes all choppy with very loud peaks separated by silences. I tested with the feedback delay synchronous to the analysis windows, trying a desynchronized delay next.
So far I've implemented the timbre stamp algorithm:
c <- haar(control-input)
n <- haar(noise-input)
e <- calculate-energy-per-octave(c)
o <- amplify-octaves-by(n, e)
output <- unhaar(o)
(operating on windowed overlapped chunks)
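The pseudocode above, made runnable for a single block (windowing and overlap-add omitted; the Haar helpers are the usual average/difference pyramid, written here for illustration):

```python
import math

def haar(x):
    bands, a = [], list(x)
    while len(a) > 1:
        bands.append([(a[i] - a[i + 1]) / 2 for i in range(0, len(a), 2)])
        a = [(a[i] + a[i + 1]) / 2 for i in range(0, len(a), 2)]
    return bands + [a]

def unhaar(bands):
    a = list(bands[-1])
    for det in reversed(bands[:-1]):
        a = [v for avg, d in zip(a, det) for v in (avg + d, avg - d)]
    return a

def energy_per_octave(bands):
    return [math.sqrt(sum(v * v for v in b) / len(b)) for b in bands]

def timbre_stamp(control, noise):
    c = haar(control)                 # c <- haar(control-input)
    n = haar(noise)                   # n <- haar(noise-input)
    e = energy_per_octave(c)          # e <- calculate-energy-per-octave(c)
    o = [[v * g for v in band]        # o <- amplify-octaves-by(n, e)
         for band, g in zip(n, e)]
    return unhaar(o)                  # output <- unhaar(o)
```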
The attached audio uses a segment of The Archers (BBC Radio 4 serial) as control input, with white noise as noise input. The output is normalized afterwards, otherwise it is very quiet (I suspect because the white noise has little energy in the lower octaves to start with).
Process: for each overlapped windowed segment of audio input, compute energy/octave via Haar wavelet transform. Accumulate count, sum, and sum of squared values. At the end of the input, output statistics: mean and standard deviation for each octave (normalized to mean 1). Compare outputs for different inputs. Think about how to make a classifier based on the output. Think hard about how to make this process differentiable and propagate discrimination results back to changes in the input.
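The accumulation step can be sketched as below (one accumulator per octave; the class name is illustrative). Count, sum and sum of squares are enough to recover mean and standard deviation at the end of the input.

```python
import math

class OctaveStats:
    """Running statistics for one octave's energy values."""
    def __init__(self):
        self.count = 0
        self.total = 0.0
        self.total_sq = 0.0

    def add(self, energy):
        self.count += 1
        self.total += energy
        self.total_sq += energy * energy

    def mean(self):
        return self.total / self.count

    def stddev(self):
        # E[x^2] - E[x]^2, clamped against tiny negative rounding
        m = self.mean()
        return math.sqrt(max(0.0, self.total_sq / self.count - m * m))
```

Normalizing to mean 1 is then just dividing each octave's values (mean and stddev alike) by that octave's mean.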
Got the asynchronous task list working. Now I can initiate many annotation tasks and continue zooming while they calculate in the background in as many threads as are available.
The overall GUI layout is bad at the moment (space around image, task list goes off screen without scroll bars, stretching the window, progress bars have tiny height) and the only algorithm I've ported to the new thread pool is Ray Out.
But it's a start.
Seems to work for doubly-embedded Julia sets, though it gets prohibitively slow without multi-core acceleration (~45mins to wait for the annotations on the attached, when it could take 3mins if all my cores were used in parallel). Image radius is about 1e-14, so not very deep, but tracing all those rays takes a long time.
Working on an asynchronous task queue with a worker pool now. I want each task to show up in the GUI when it is enqueued, with its own progress bar and cancel button; to be removed from GUI when it is done (when completed, it adds the annotation to the image, unless cancelled).
Ideally I will be able to continue interacting while tasks are running, enqueuing new tasks or even navigating to different locations. It remains to be seen whether I will need a priority system to make rendering more responsive.
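The task model described above can be sketched like this (names are illustrative, not the actual API; the completion callback that would add the annotation to the image just collects results here):

```python
import threading
from concurrent.futures import ThreadPoolExecutor

class AnnotationTask:
    """A cancellable task with its own progress, one per GUI row."""
    def __init__(self, name, steps):
        self.name = name
        self.steps = steps
        self.progress = 0.0
        self.cancelled = threading.Event()

    def cancel(self):
        self.cancelled.set()

    def run(self):
        for i in range(self.steps):
            if self.cancelled.is_set():
                return None          # dropped: no annotation added
            # ... per-step work (e.g. one ray tracing iteration) ...
            self.progress = (i + 1) / self.steps
        return self.name             # stands in for the annotation

def run_tasks(tasks, workers=4):
    results = []
    with ThreadPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(t.run) for t in tasks]
        for f in futures:
            r = f.result()
            if r is not None:        # cancelled tasks add nothing
                results.append(r)
    return results
```

In the real GUI the progress field would be polled from the GTK main loop (not updated from the worker thread directly), which is exactly the kind of cross-thread detail that caused the crash mentioned above.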
Automatic annotation progress update:
✅ child bulbs of a mu-unit
✅ filaments of a mu-unit (done, this post)
❎ child islands of a mu-unit
❎ embedded Julia set filaments (next on the list)
❎ embedded Julia set islands
❎ embedded Julia set hubs
Mu-unit is Robert Munafo's terminology: https://www.mrob.com/pub/muency/muunit.html
Add a button to annotate the descendant child bulbs of a component.
Make this button annotate the tuned islands too.
Add rays, namely the periodic pair landing on the root, and pre-periodic pairs pruning the filaments.
✅ first medium part done
❎ hard part needs a completely different approach
❎ second medium part is fairly straightforward, next on the list
I fixed the crash (race condition caused by freeing something in the wrong (non-GTK) thread), and improved the sort ordering too.
Also added an automatic dynamic level of detail feature for the annotations, so when you zoom out you don't get a mess of unreadable overlapping text. Not perfect yet (the rays still bunch up heavily) but better than before.
Trying to analyse the patterns in the external angles of hubs heading towards the other tips of the spokes:
-- influencing island p4
o = .(0011)
i = .(0100)
-- central island p29
O = .(00110011001100110100001100111)
I = .(00110011001100110100001101000)
-- bulb 1/5
b = .(00001)
B = .(00010)
-- main inner hub
-- spoke one (towards inner tip)
-- spoke two
-- spoke three
-- spoke four
-- spoke five (towards central island)
Spoke one is discussed in the previous post in this thread.
Spokes two, three and four seem to follow something like: if the last symbol of the periodic part of the hub is i (resp. o), take the preperiodic part of the tip whose periodic part is (i) (resp. (o)), append o (resp. i), and combine with the periodic part of the hub.
Continuing along a spoke in the same direction means going along spoke one of each next hub.
Spoke five needs more thought.
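The i/o shorthand can at least be expanded mechanically into binary angles, using the influencing island's period-4 ray pair given above (o = .(0011), i = .(0100)). How the preperiodic and periodic parts combine is my reading of the pattern, not a checked formula:

```python
# Expand words over {i, o} into binary external angle strings.
# BLOCKS comes from the influencing island p4 rays listed above;
# angle() just renders .preperiodic(periodic) notation.

BLOCKS = {"o": "0011", "i": "0100"}

def expand(word, blocks=BLOCKS):
    return "".join(blocks[c] for c in word)

def angle(preperiodic, periodic, blocks=BLOCKS):
    return ("." + expand(preperiodic, blocks)
            + "(" + expand(periodic, blocks) + ")")
```

Each symbol inserted into the preperiodic part adds 4 bits, consistent with "each next hub along has preperiod 4 higher".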
Now imagine moving from the bottom 28p4 hub towards the bottom right 30p4 tip, via the main hub in between at each step. The tip has rays
and the initial hub has rays
Each next hub along has preperiod 4 higher, and in the limit the rays of the hub must approximate the rays of the tip, so it makes sense to append an 'i' or 'o' to the preperiodic part depending if the hub's ray is above or below the tip. Thus the next hub along has rays
and the next
and the next
and so on. In the limit an infinite number of i or o will be inserted, so the remainder is never reached, and the periodic part is just (i) or (o).
The tip at the other end (the side closer to the main body of the #Mandelbrot set) has rays
and the initial hub has rays
and moving towards the tip gives
and so on.