Added auto-synchronization to my rephasor algorithm, a process that resynthesizes an input phasor signal (a periodic ramp) using a scaling value to change the frequency in relative terms (2 = twice as fast, 0.5 = half as fast, etc.).
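For a sense of the basic idea (this is a toy sketch of a phasor and a rate-scaled rephasor, not the author's actual algorithm), a rephasor can be modeled as measuring the source ramp's per-sample increment and re-integrating it with a scale factor:

```python
def phasor(freq, sr, n):
    """Generate n samples of a periodic ramp in [0, 1)."""
    out, phase = [], 0.0
    for _ in range(n):
        out.append(phase)
        phase += freq / sr
        phase -= int(phase)  # wrap back into [0, 1)
    return out

def rephasor(sig, scale):
    """Resynthesize a phasor from `sig`, with the rate scaled:
    scale=2 runs twice as fast, scale=0.5 at half speed."""
    out, phase, prev = [], 0.0, sig[0]
    for x in sig:
        inc = x - prev       # measured per-sample increment
        if inc < 0:
            inc += 1.0       # the source ramp just wrapped
        prev = x
        phase += inc * scale
        phase -= int(phase)  # wrap the output ramp too
        out.append(phase)
    return out
```

With `scale=2`, the output ramp completes two cycles for every cycle of the input, which is what makes it useful for modulating voices at related rates.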
This sound demo consists of three oscillators whose pitches are modulated by phasor signals. Two of these phasors are rephasors with synchronization, sourced from the remaining phasor.
In the patch, these rephasors speed up and then slow back down to the original rate.
Notice how, as the voices settle back to the original rate, they gradually line up again.
And here is the same patch using the old version of the algorithm.
Without autosync, rephasors suffer from timing drift, even when they are playing at the same rate.
Right where the voices settle down, you can hear some voices consistently playing at the wrong time. It never lines up. This is the "drift".
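The drift and the fix can be illustrated with a toy model (a sketch of the described behavior, not the actual implementation: here "autosync" simply snaps the output to the source phase whenever the source wraps while the rate is back at unity):

```python
def rephasor_sync(sig, scales, autosync):
    """Follow phasor `sig` at per-sample rate scale `scales[k]`.
    With autosync, re-align to the source phase whenever the
    source wraps and the scale is back at 1 (a toy stand-in
    for the real auto-synchronization)."""
    out, phase, prev = [], 0.0, sig[0]
    for x, s in zip(sig, scales):
        wrapped = x < prev                 # source ramp just reset
        inc = x - prev + (1.0 if wrapped else 0.0)
        prev = x
        phase = (phase + inc * s) % 1.0
        if autosync and wrapped and abs(s - 1.0) < 1e-9:
            phase = x                      # snap back to the source
        out.append(phase)
    return out

# Source phasor: 100-sample period, 3 full cycles.
sig = [(k % 100) / 100 for k in range(301)]
# Speed up for a while, then return to the original rate.
scales = [1.0] * 100 + [1.5] * 100 + [1.0] * 101

drift  = rephasor_sync(sig, scales, autosync=False)
synced = rephasor_sync(sig, scales, autosync=True)
```

Without sync, the speed excursion leaves the output permanently half a cycle out of phase with the source even though both are back at the same rate; with sync, the offset is corrected at the next source wrap.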
A new model for a gesture sequencer: reimagine it as a DSP block diagram.
Instead of reading from a fairly linear score the way Gest does, this new system could be controlled in a more nonlinear way using a state machine and/or a VM.
It would be a much more elegant system. A VM would allow for more generative musical structures to emerge. Also, multiple gestures could share the same VM, allowing for concurrent cross-communication between gestures.
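A minimal sketch of that idea (the opcodes, names, and structure here are all hypothetical, not from Gest): a tiny VM steps through a program, drives several gesture outputs, and lets them communicate through shared registers:

```python
from dataclasses import dataclass, field

@dataclass
class GestureVM:
    program: list                                 # list of (op, *args) tuples
    regs: dict = field(default_factory=dict)      # shared state between gestures
    outputs: dict = field(default_factory=dict)   # one entry per gesture
    pc: int = 0

    def step(self):
        """Execute one instruction; return False when the program ends."""
        if self.pc >= len(self.program):
            return False
        op, *args = self.program[self.pc]
        self.pc += 1
        if op == "set":          # write a shared register
            self.regs[args[0]] = args[1]
        elif op == "target":     # point a gesture at a new value
            self.outputs[args[0]] = args[1]
        elif op == "copy":       # cross-communication via a shared register
            self.outputs[args[0]] = self.regs[args[1]]
        elif op == "jump":       # nonlinear control flow
            self.pc = args[0]
        return True

vm = GestureVM(program=[
    ("set", "root", 60),            # shared register both gestures can see
    ("target", "voice1", 60),
    ("copy", "voice2", "root"),     # voice2 reads the shared register
])
while vm.step():
    pass
```

Because both gestures run on the same VM and read the same registers, one gesture can react to state written by another, which is the concurrent cross-communication described above.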
@paul Where can I read about Gest, or gestural sequencers? Not sure I understand the term, but I’m interested in composable/combinable sequencers, which it sounds like you’re talking about?