Formant frequency control and voice part control as well (soprano, alto, counter-tenor, tenor, bass) with linear interpolation.
In this example, I've mapped pitch to voice part, so it should be more soprano-y at the top, and more bass-y at the bottom. I think...
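The core idea, linearly interpolating between per-part formant tables, can be sketched like this. This is not the sndkit code, just an illustration; the formant values are approximate figures for the vowel "a" from the widely circulated Csound formant tables, and the real thing interpolates across all five voice parts rather than just two endpoints:

```python
# Illustrative sketch, not the sndkit implementation.
# Formant center frequencies (Hz) for the vowel "a": approximate
# values from the commonly circulated Csound formant tables.
BASS_A = [600.0, 1040.0, 2250.0, 2450.0, 2750.0]
SOPRANO_A = [800.0, 1150.0, 2900.0, 3900.0, 4950.0]

def lerp(a, b, t):
    """Linear interpolation between a and b, with t in [0, 1]."""
    return a + (b - a) * t

def voice_part_formants(t):
    """t=0 -> bass, t=1 -> soprano; values in between blend the parts."""
    return [lerp(lo, hi, t) for lo, hi in zip(BASS_A, SOPRANO_A)]
```

Mapping pitch to voice part then just means deriving `t` from the note: something like `t = (pitch - low) / (high - low)`, so high notes lean soprano and low notes lean bass.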
A simple LFO sweeping through the vowel states in fixed order.
Implementing a vowel filter for #sndkit!
I made a little notation language for making gestures in Gest:
"beg 3 3 t 60 sg pr 2 t 63 sg t 60 gl
pr 3 t 62 sg t 60 sg t 59 gl end loop fin"
produces the gesture used to control pitch below.
It's only a small abstraction above the low-level commands used to populate gestures, but it sure will save some keystrokes!
I felt the need to add some glissando behaviors to my gesture sequencer because of all the pitch sequencing I was doing.
I've added two kinds of glissando behavior: a regular gliss that performs glissando over the last half of the note, and a smaller, more subtle gliss that only does glissando over the last 10% of the note. Both use a cubic slope.
The demo below showcases both glissando behaviors.
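A rough sketch of the two behaviors (not the actual Gest code): the pitch holds for the head of the note, then glides to the next target over the tail, eased with a cubic slope:

```python
# Sketch of the two glissando behaviors, assuming a 0-1 "position
# within the note" ramp; not the actual Gest implementation.

def gliss(pos, cur, nxt, tail):
    """pos: 0-1 position within the note; cur/nxt: pitches;
    tail: fraction of the note spent gliding."""
    if pos < 1.0 - tail:
        return cur                    # hold pitch for the head of the note
    a = (pos - (1.0 - tail)) / tail   # 0-1 ramp across the tail
    a = a * a * a                     # cubic slope: eases in, finishes fast
    return cur + (nxt - cur) * a

def regular_gliss(pos, cur, nxt):
    return gliss(pos, cur, nxt, 0.5)  # glide over the last half

def small_gliss(pos, cur, nxt):
    return gliss(pos, cur, nxt, 0.1)  # subtler: only the last 10%
```

The cubic easing is what keeps the gliss from sounding like a siren: most of the pitch movement is packed into the very end of the note.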
gesture sequencer updates, some code
I'm now able to sequence gestures using LIL, the scripting language included with sndkit.
The following LIL snippet uses two instances of Gest to control an FM oscillator. One controls pitch (via the function sequence), the other timbre (via the function modindex):
The meaningful thing to extract from this is that gestures are programmed using a set of low-level commands. These commands create phrases that take up a fixed number of beats, populate those phrases with Ramp Trees, and cap the leaf nodes with targets, whose behaviors determine the interpolation method used to move from one target to the next.
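As a rough mental model of that hierarchy (hypothetical names and structure; the real Gest data structures are C and differ in detail):

```python
# Hypothetical data model for the phrase -> ramp tree -> target
# hierarchy described above; not the actual Gest structures.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Target:
    value: float          # e.g. a MIDI pitch or a timbre parameter
    behavior: str = "gl"  # interpolation into the next target,
                          # e.g. "gl" (gliss) or "sg" (small gliss)

@dataclass
class Ramp:
    divisions: List["Ramp"] = field(default_factory=list)  # subdivisions
    target: Optional[Target] = None  # leaf nodes are capped with a target

@dataclass
class Phrase:
    beats: int                   # a phrase takes a fixed number of beats
    tree: Optional[Ramp] = None  # the ramp tree subdividing those beats
```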
Sets of simple low-level commands like the ones above are a programmer's best friend, because they lend themselves well to higher-level abstractions :)
Just merged all the gesture tests I made for myself into one medley, and I have to say the results are quite satisfying.
Again, this is just one gesture sequencer controlling only the pitch of a single FM oscillator. This is all step-sequenced, no human-recorded performance. Very surprised by how natural and fluid it feels.
And it's all externally clocked so it plays well with others!
of course, this is just one interpretation of how to phrase it.
Instead of slowing down into the high note, one could speed up in anticipation before dramatically slowing down at the climax. More or less inverting the mass changes.
Sure, this version sounds a little unnatural, and it's not my favorite, but with a bit of tweaking it's on its way to being a valid interpretation.
This sort of thinking starts to get at the "hows" of computer-performed music and not the "whats", which is something I've been thinking deeply about:
Here's some temporal weight in action!
I'm tweaking the masses and inertias in some of the targets so that time compresses while reaching the high note, and expands on the following quarter note triplet.
Slightly exaggerated for dramatic effect!
Here it is with an FM oscillator instead of a sine wave. A bit more spectrum to play with.
Gesture sequencing continues!
After some bug fixing, polyramps can stack now!
This melodic phrase is a gesture that has eighth notes, quintuplet eighth notes, and quarter/eighth note triplets. It's on a loop, so I've programmed the last note to glissando back into the first note. This is possible because it's all a continuous audio-rate signal.
Keep in mind there is no predetermined grid here. All that's given is a phasor signal pulsing out beats like a conductor would (this is converted into a metronome heard in the recording). The sequencer is interpreting those beats and doing the subdivisions dynamically, just like a human performer would do.
gesture synthesizer updates
I added "step" behavior to make gestures sound more like a traditional step sequencer. Also makes it easier to debug.
I also implemented monoramp transformation.
So, if a polyramp takes a single 0-1 ramp and divides it into N equal steps, a monoramp takes N ramps and merges them into one ramp. From there, a polyramp could be applied to it, creating arbitrary divisions of rhythm.
This gesture example features two eighth notes, followed by a set of quarter-note triplets.
The quarter note triplets were made by creating a monoramp from 2 beats, then converting that monoramp into a polyramp of 3 beats.
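The arithmetic behind that trick is pleasantly small. A sketch (not the actual sndkit code) assuming a phasor that ramps 0-1 once per beat:

```python
# Sketch of the monoramp/polyramp math; not the sndkit implementation.

def monoramp(beat_phasor, beat_index, n):
    """Merge n consecutive 0-1 beat ramps into one 0-1 ramp."""
    return (beat_index + beat_phasor) / n

def polyramp(ramp, n):
    """Divide one 0-1 ramp into n equal, faster 0-1 ramps."""
    return (ramp * n) % 1.0

def triplet_phase(beat_phasor, beat_index):
    """Quarter-note triplets: monoramp over 2 beats, polyramp of 3."""
    return polyramp(monoramp(beat_phasor, beat_index, 2), 3)
```

Each triplet ramp ends up spanning 2/3 of a beat, which is exactly the duration of a quarter-note triplet: 3 notes in the space of 2 beats.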
I let the last note have linear behavior so it could dynamically gliss back into itself.
I found the issue. The good news is it was a mistake I made and not the fundamental problem with error accumulation I was dreading. That one can wait for another day.
Here's the gesture with a metronome attached to it. Note how this gesture is moving in time with the beat of the metronome.
In theory, I should be able to slow down the tempo and the gesture would automatically stay synchronized to it without having any prior knowledge about the tempo changes.
Here's the gesture mapped to the frequency of a sine wave.
Ah, much better.
Now with targets applied. Those abrupt line discontinuities scream "I'm off by one ya goof!"
Here is a plot of the underlying ramptree signal. You can clearly see the rhythmic groupings of the ramps here.
Ramp Tree proof of concept is working now! These stupid sine chirps are a beautiful sound this morning after days of work.
What's going on: an external phasor signal (a periodic ramp going 0-1) is going into the Ramp Tree, and the Ramp Tree is dynamically subdividing the phasor signal by analyzing it and resynthesizing a new version.
To use some musical terms, what comes in as a signal pulsing at quarter notes comes out shaped as a 3/4 rhythmic phrase (1 measure): 2 eighth notes, a quarter note, followed by 3 eighth-note triplets.
The next step is to use these ramps as alpha values for linear interpolation. Then it'll be a real gesture!
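An illustrative sketch of that resynthesis step (not the actual implementation): accumulate the quarter-note phasor into a position within the 3-beat measure, then re-chop it into one 0-1 ramp per note of the phrase:

```python
# Sketch of reshaping a measure position into per-note ramps;
# not the actual Ramp Tree code.

# Note durations in beats: 2 eighths, a quarter, 3 eighth-note triplets.
DURATIONS = [0.5, 0.5, 1.0, 1/3, 1/3, 1/3]

def resynth(measure_pos):
    """measure_pos in [0, 3) -> (note index, 0-1 ramp within that note)."""
    start = 0.0
    for i, dur in enumerate(DURATIONS):
        if measure_pos < start + dur:
            return i, (measure_pos - start) / dur
        start += dur
    return len(DURATIONS) - 1, 1.0  # clamp at the end of the measure
```

Using those ramps as alphas for linear interpolation is then just `value = a + (b - a) * ramp` between successive targets.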
I teach computers how to sing.