Metanodes implemented in my gesture synthesizer!

It's hard to explain what metanodes are out of context, but this little sequenced melody performed on an FM oscillator showcases a little bit of what is possible.

The sequenced melody can be represented in this pseudo-lilypond notation:

||: c4 g d' | c d8 e d'4 :||

Metanodes allow one to swap out chunks within a phrase. Here's some more pseudo-notation using s-expressions that represent how Gest is sequencing it:

||: c4 (@ (g4) (d8 e)) d'4 :||

The metanode (@) cycles between the two melodic fragments every time the phrase repeats.
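
To make the cycling idea concrete, here's a rough C sketch (this is not Gest's actual code or API, just a toy model of the concept): a metanode holds a list of alternative fragments and advances to the next one each time the enclosing phrase loops around.

    /* Toy sketch (not Gest's actual API): a "metanode" holds a few
     * alternative fragments and cycles through them on each repeat. */
    #include <stdio.h>

    typedef struct {
        const char *alts[4]; /* alternative fragments, as strings here */
        int nalts;           /* how many alternatives are in use */
        int pos;             /* which alternative plays next */
    } metanode;

    /* return the current fragment, then advance for the next repeat */
    static const char *metanode_next(metanode *m)
    {
        const char *frag = m->alts[m->pos];
        m->pos = (m->pos + 1) % m->nalts;
        return frag;
    }

    int main(void)
    {
        metanode m = {{"g4", "d8 e"}, 2, 0};
        int rep;

        /* four repeats of: c4 (@ (g4) (d8 e)) d'4 */
        for (rep = 0; rep < 4; rep++) {
            printf("c4 %s d'4\n", metanode_next(&m));
        }
        return 0;
    }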


If you're listening to this, you may hear slides, slurs, and glissandos that aren't accounted for in the lilypond notation.

In Gest, these articulations are all explicitly and precisely defined.

The actual gesture producing the notes here can be fully and unambiguously defined using the following notation (not pseudo-code, this actually works):

beg 3 3
t 0 sg
mn 1 2 t 7 sg pr 2 t 2 sg t 4 gl
t 14 mg
end loop fin


So, to date, there are now metanodes, metatargets, and metabehaviors, all of which are structures that can be used to change looped gestures during a performance.

At the moment, these all just change stuff sequentially, but it would be trivial to make the choices randomized instead. This would lend itself very well to generative music.
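
As a rough illustration (again, not Gest's real code), swapping the sequential choice for a random one amounts to replacing the advancing index with a random pick:

    /* Sketch of the same cycling metanode, but with a randomized
     * choice each repeat instead of a sequential one. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <time.h>

    static const char *alts[] = {"g4", "d8 e"};

    int main(void)
    {
        int rep;
        srand((unsigned)time(NULL));

        /* each repeat picks one of the alternatives at random */
        for (rep = 0; rep < 4; rep++) {
            printf("c4 %s d'4\n", alts[rand() % 2]);
        }
        return 0;
    }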

Slightly less trivial things I am thinking about are metaphrases (one level up from a metanode) and metavalues (mainly because I want to add randomization to values, to do things like humanize pitch signals).
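
A possible shape for the metavalue idea, sketched in C (purely hypothetical, since this isn't built yet): a nominal value picks up a small random deviation each time it is read, e.g. a pitch detuned by a few cents.

    /* Hypothetical "metavalue" humanization sketch (not in Gest yet):
     * jitter a nominal pitch by a small random amount on each read. */
    #include <stdio.h>
    #include <stdlib.h>

    /* return pitch (in MIDI note units) jittered by +/- `cents` cents */
    static double humanize(double nn, double cents)
    {
        double r = (double)rand() / RAND_MAX;     /* 0..1 */
        return nn + (r * 2.0 - 1.0) * cents / 100.0;
    }

    int main(void)
    {
        int i;
        srand(12345);
        for (i = 0; i < 4; i++) {
            printf("%g\n", humanize(60.0, 10.0)); /* C4 +/- 10 cents */
        }
        return 0;
    }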
