the concept: "gesture" is a fancy name for a control signal. A set of coordinated, synchronized gestures is used to manipulate a sound object, producing a "gestured sound". A well-mapped moving gesture will produce something that also sounds like movement... a sonic gestalt. Gestalts work together to produce a perceptual cue that our brains register as a musical "note".
The origins of a musical "note" are, well... notes. Notes get interpreted by human musicians, who in turn produce (body) gestures on musical instruments. Gesture here aims to crudely emulate this layer of musical performance with computers.
Implementation: in practice, a Gesture is a glorified breakpoint line generator with variable slopes.
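To make "breakpoint line generator with variable slopes" concrete, here's a minimal sketch. The function name `bpline` and the exponent-based `curve` parameter are illustrative choices of mine, not the actual implementation:

```python
def bpline(points, t):
    """Evaluate a breakpoint line at time t.

    points: list of (time, value, curve) tuples, sorted by time.
    `curve` shapes the segment leaving that breakpoint: 1.0 gives a
    straight line, >1 eases in (slow start), <1 eases out.
    """
    if t <= points[0][0]:
        return points[0][1]
    for (t0, v0, c), (t1, v1, _) in zip(points, points[1:]):
        if t0 <= t < t1:
            a = (t - t0) / (t1 - t0)        # 0..1 position in the segment
            return v0 + (v1 - v0) * a ** c  # variable slope via exponent
    return points[-1][1]                    # past the end: hold last value


# Rise linearly over 1s, then fall with an ease-in curve over 1s:
pts = [(0.0, 0.0, 1.0), (1.0, 1.0, 2.0), (2.0, 0.0, 1.0)]
```

Feed it a steadily increasing `t` and you get the line as a signal.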
A common problem with lines is that many instances will eventually drift apart. What do? Synchronize them to a master clock. This has led to something I built called the "gesture signal generator" (GSG).
A specialized clock signal known as the "conductor" drives the timing of one or more GSGs. Each GSG is programmed like a breakpoint line generator, with timings expressed relative to the conductor signal. When the conductor changes tempo, the GSG automagically adapts without needing to know the tempo itself. It turns out I re-invented an EE concept known as a "phase-locked loop". This, plus an interpolator and some way of iterating over breakpoint values, is the core primitive that makes my Gesture possible. In other words: breakpoint line generators that are impervious to drift accumulation over time.
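A rough sketch of the conductor/GSG relationship. The class names, the phasor clock, and the beat-counting scheme here are my hypothetical reconstruction for illustration, not the actual code:

```python
class Conductor:
    """Master clock: a phasor that ramps 0->1 once per beat, then wraps.

    A tempo change only alters the phase increment; anything slaved to
    the phasor re-times itself automatically.
    """
    def __init__(self, sr, bpm):
        self.sr = sr
        self.phase = 0.0
        self.set_tempo(bpm)

    def set_tempo(self, bpm):
        self.inc = bpm / (60.0 * self.sr)  # phase increment per sample

    def tick(self):
        """Advance one sample; return True at the start of a new beat."""
        self.phase += self.inc
        if self.phase >= 1.0:
            self.phase -= 1.0
            return True
        return False


class GestureSignal:
    """Breakpoint line generator slaved to a Conductor.

    Segments are (duration_in_beats, target_value). Position is derived
    from the conductor's phase, not an internal sample counter, so tempo
    changes propagate immediately and no drift accumulates.
    """
    def __init__(self, segments, start=0.0):
        self.segments = list(segments)
        self.idx = 0
        self.beats_done = 0
        self.start = start

    def tick(self, phase, new_beat):
        if new_beat:
            self.beats_done += 1
            dur, target = self.segments[self.idx]
            if self.beats_done >= dur and self.idx < len(self.segments) - 1:
                self.start = target        # land exactly on the target
                self.idx += 1
                self.beats_done = 0
        dur, target = self.segments[self.idx]
        pos = min((self.beats_done + phase) / dur, 1.0)
        return self.start + (target - self.start) * pos


# Rise over 1 beat, fall over 1 beat, at 60 BPM and a 100 Hz control rate:
sr, bpm = 100, 60
cond = Conductor(sr, bpm)
gest = GestureSignal([(1, 1.0), (1, 0.0)], start=0.0)
out = []
for _ in range(2 * sr):
    beat = cond.tick()
    out.append(gest.tick(cond.phase, beat))
```

Calling `cond.set_tempo(120)` mid-stream would halve the remaining time of the current segment with no change to `gest` itself, which is the point of the whole arrangement.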
Realized Structures: With the Gesture Signal Generator, everything is a signal, and the whole system can be thought of as a topology such as a DAG or a signal flow chart. This leads to interesting things.
A few constructs I've invented so far include Temporal Weight, Temporal Skew, Modulated Conductors, Articulated Gestures, and Gestured Metronome with Segmented Ramp.
@phetre never heard of this before. To be honest, I'm having a hard time wrapping my head around all that math. But it looks interesting!
I guess you could generously call my efforts "applied DSP": I needed better lines for my sound objects, and here's what I've rigged up so far.
@paul Sounds good. I don't understand the Mazzola stuff either (yet; I haven't given up hope!), but I suspect there might be a similar intuition: gestures as mappings from the unit interval? Not sure. Anyway, can't wait to see your blog!
@paul The idea sounds really cool, but I don't think I understand it that well (I don't have much background in signals). In this case, is gesture some combination of timbre, articulation, dynamics, i.e. everything that's not the actual note that musicians choose to play? or does it include the note too?
@brycew maybe this theremin performance of The Swan by Clara Rockmore will help clarify things:
What's interesting about the theremin is that you only have control of pitch and amplitude during a performance. No timbre control*. Yet despite these limitations, it's a very compelling performance. Why? The combined continuous gestures of the left and right hands form what we hear as notes, played with virtuosic phrasing. In other words, the note appears when all the gestures move together to articulate it.
It's reasonably straightforward to build a virtual theremin inside a computer: it's just a single oscillator with freq/amp control. Getting those controls to move in a lyrical way, the way Clara does (the "performance"), is a much harder problem, and it's what my systems attempt to approach.
*: There are tone knobs on theremins that change timbre, but they're set before the performance, not during.