I've uploaded the tracks I've made so far to this instance I joined months ago: open.audio/channels/looptober2


And for those who really want to see a mess, I've uploaded to my wiki the org document with all the Scheme code used to generate the loops:



Every loop I'm making is entirely synthesized, so no human-produced sounds. Nothing here is human-performed either (such as playing from a MIDI controller). It's all very precisely sequenced via a notation language using my gesture sequencer.

Anything relating to tempo fluctuation uses something called temporal weight, which gives notes mass and inertia that warp the global tempo, inverting the typical relationship found in DAWs, where the tempo dictates the notes rather than the other way around.
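As a rough illustration of the idea (the function names and numbers here are my own invention, not the actual sequencer's API), temporal weight could be sketched like this: heavier notes drag on the clock, so time bends around the notes instead of the notes obeying a fixed grid.

```python
# Hypothetical sketch of "temporal weight": each note carries a mass,
# and heavier notes locally slow the global tempo (inertia), instead of
# the tempo dictating note timing as in a typical DAW.

BASE_TEMPO = 120.0  # beats per minute

def effective_tempo(base_bpm, weight, inertia=0.5):
    """Heavier notes drag the tempo down; weight 0 leaves it unchanged."""
    return base_bpm / (1.0 + inertia * weight)

def schedule(notes, base_bpm=BASE_TEMPO):
    """Convert (weight, beats) note pairs into absolute onset times in seconds."""
    t = 0.0
    onsets = []
    for weight, beats in notes:
        onsets.append(t)
        bpm = effective_tempo(base_bpm, weight)
        t += beats * 60.0 / bpm  # heavier notes take longer to pass
    return onsets

# Four quarter notes; the third is "heavy" and stretches time around it.
onsets = schedule([(0, 1), (0, 1), (2.0, 1), (0, 1)])
```

In this toy version the note's mass only scales its own duration; presumably a real implementation would let the inertia smear across neighboring notes as well.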

I also sometimes do things like having the meter within a bar progress exponentially instead of linearly. I call this temporal skewing.
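A hedged sketch of what that could mean (my own toy formulation, not the actual implementation): map each beat's linear position within the bar through an exponential curve, so equal notated beats occupy unequal spans of real time.

```python
import math

# Toy sketch of "temporal skewing" (hypothetical formulation): the bar's
# internal clock follows an exponential curve instead of a straight line,
# so evenly notated beats land at unevenly spaced points in real time.

def skew_exponential(phase, k=2.0):
    """Warp linear bar phase in [0, 1]; k controls the amount of skew."""
    return (math.exp(k * phase) - 1.0) / (math.exp(k) - 1.0)

# Five beat boundaries of a 4-beat bar, skewed: for k > 0 the early
# beats are compressed and the later beats stretched.
beat_positions = [skew_exponential(i / 4) for i in range(5)]
```

The curve still maps 0 to 0 and 1 to 1, so the bar keeps its total length; only the interior timing is warped.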

Finally, there are various glissandi that happen between notes. How one travels from note A to note B is known as a behavior, and this is explicitly programmed into a phrase.
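To make "behavior" concrete, here is a hedged sketch (the names are my own, not the sequencer's): two ways of traveling from pitch A to pitch B, a straight-line glide and an eased one, sampled across a phrase.

```python
# Hypothetical sketch of a "behavior": the trajectory a pitch takes from
# note A to note B, programmed explicitly into a phrase. Two example
# behaviors: a straight-line glide and a smoothstep ease-in/ease-out.

def linear(a, b, t):
    """Straight glissando from a to b as t goes from 0 to 1."""
    return a + (b - a) * t

def smoothstep(a, b, t):
    """Glide that eases into and out of the slide."""
    s = t * t * (3.0 - 2.0 * t)
    return a + (b - a) * s

def render_glide(a, b, behavior, steps=5):
    """Sample the pitch trajectory from a to b under a given behavior."""
    return [behavior(a, b, i / (steps - 1)) for i in range(steps)]

straight = render_glide(60.0, 67.0, linear)      # MIDI note 60 up to 67
eased = render_glide(60.0, 67.0, smoothstep)     # slow-fast-slow glide
```

Because the behavior is just a function of the phase, swapping in a different curve changes the character of the glissando without touching the phrase itself.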

@paul I really like your tracks. Could we imagine a synth voice pedal taking real voice as input, à la vocoder?

@raphael You could use something like an LPC10-style codec (encoder/decoder) to convert speech into Speak & Spell-like sounds. I've done this before, but it does add a few milliseconds of latency. It's a different technique from the one I'm using, however, which is articulatory synthesis.
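For anyone curious what the LPC route looks like, here's a toy sketch of the underlying idea (illustration only; a real LPC10 codec adds framing, quantization, and voiced/unvoiced detection): fit an all-pole "vocal tract" filter to a frame of audio, then drive it with a raw pulse train, which is where the Speak & Spell buzz comes from.

```python
import math
import random

# Toy linear predictive coding (LPC) sketch: model a frame of speech as
# an all-pole filter (the vocal tract) excited by a simple source, then
# resynthesize from a crude pulse train. Not the actual LPC10 codec.

def autocorr(x, lags):
    """Autocorrelation r[0..lags] of a signal."""
    return [sum(x[n] * x[n - k] for n in range(k, len(x)))
            for k in range(lags + 1)]

def levinson(r, order):
    """Levinson-Durbin: solve for a_k in x[n] ~ sum_k a_k * x[n-k]."""
    a = [0.0] * order
    e = r[0]
    for i in range(order):
        k = (r[i + 1] - sum(a[j] * r[i - j] for j in range(i))) / e
        a = [a[j] - k * a[i - 1 - j] for j in range(i)] + [k] + a[i + 1:]
        e *= 1.0 - k * k
    return a

def synthesize(a, excitation):
    """Run an excitation signal through the all-pole filter a."""
    out = []
    for n in range(len(excitation)):
        out.append(excitation[n] +
                   sum(a[k] * out[n - 1 - k]
                       for k in range(len(a)) if n - 1 - k >= 0))
    return out

# Analyze a synthetic "vowel" (two partials plus a little noise), then
# resynthesize it from a crude glottal pulse train.
random.seed(1)
frame = [math.sin(0.19 * n) + 0.5 * math.sin(0.67 * n)
         + 0.01 * random.gauss(0.0, 1.0) for n in range(400)]
coeffs = levinson(autocorr(frame, 10), 10)
buzz = synthesize(coeffs, [1.0 if n % 50 == 0 else 0.0 for n in range(400)])
```

Swapping the pulse train for white noise gives whispered speech, which is roughly what the voiced/unvoiced flag in a real LPC vocoder selects between.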

@paul I understand. The precision is quite outstanding in your examples. I was curious how it could possibly interact in live environments. Thanks for your answer.

@raphael Thank you! It's all made possible by this gesture synthesizer I've been working on, which lends itself quite well to sequencing vocal synthesizers and other lyrical instruments.


Welcome to post.lurk.org, an instance for discussions around cultural freedom, experimental, new media art, net and computational culture, and things like that.