If you're listening to this, you may hear slides, slurs, and glissandos that aren't accounted for in the LilyPond notation.
In Gest, these articulations are all explicitly and precisely defined.
The actual gesture producing the notes here can be fully and unambiguously defined using the following notation (not pseudo-code, this actually works):
beg 3 3
t 0 sg
mn 1 2 t 7 sg pr 2 t 2 sg t 4 gl
t 14 mg
end loop fin
So, to date, there are now metanodes, metatargets, and metabehaviors: all structures that can be used to change looped gestures during a performance.
At the moment, these all step through their choices sequentially, but it would be trivial to randomize the selection instead. That would lend itself very well to generative music.
Slightly less trivial things I am thinking about are metaphrases (one level up from a metanode) and metavalues (mainly to add randomization to values, for things like humanizing pitch signals).