Follow-up question: is there anything in particular people would be interested in reading about? I probably won't have time to go over anything in too much detail, but I'll be covering a lot of ground from soundalike production analysis to DSP algorithms to coding. And yeah maybe a dash of music theory too.

Would anyone be interested in a blog series that documents my attempts at making a synthwave track from scratch?

By "from scratch", I mean no DAW, using homebrew softsynths and systems I created with a mix of C and a variety of embedded scripting languages (lua, scheme, etc.).

So, not just typical production analysis, but also consideration of the underlying DSP techniques used. And code, too.

patchlore boosted

This month I think I'm going to participate in the Libre Music Challenge. But I'm going to do it all using my own open source tools like sndkit and gest.
linuxmusicians.com/viewtopic.p

The theme is 80s new wave/synthwave, using only 'wavetable synths' and drum samples. Any effects are encouraged.

I've never made a synthwave track before. If you have any faves, please share!

There are countless ways to produce a line going from point A to point B. In my Gesture Sequencer, these are known as behaviors.

Currently, there are a little over half a dozen different behaviors, including step, linear, exponential, bezier, and a "glissando" mapping designed for pitches.

The example below is an FM oscillator whose pitch is being controlled via a gesture going between two notes. The jumps between the notes cycle through the various behaviors available.

The cycling is made possible through a new thing I made today called 'metabehaviors', but that's all I'm going to say about that...
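
To give a rough idea of what a behavior boils down to, here's a minimal C sketch: each one maps a normalized phase t in [0, 1] to a value between A and B. The function names and formulas are illustrative guesses, not the Gesture Sequencer's actual code.

#include <stdio.h>

/* step: hold A for the whole duration, then jump to B */
static double beh_step(double a, double b, double t)
{
    return t < 1.0 ? a : b;
}

/* linear: a straight line from A to B */
static double beh_linear(double a, double b, double t)
{
    return a + (b - a) * t;
}

/* exponential-ish: same endpoints, but the curve eases in */
static double beh_exponential(double a, double b, double t)
{
    return a + (b - a) * t * t;
}

/* cubic bezier: two control points shape the path between A and B */
static double beh_bezier(double a, double b, double c1, double c2, double t)
{
    double u = 1.0 - t;
    return u*u*u*a + 3*u*u*t*c1 + 3*u*t*t*c2 + t*t*t*b;
}

int main(void)
{
    double t;
    for (t = 0.0; t <= 1.0; t += 0.25) {
        printf("t=%.2f step=%g linear=%g expo=%g bezier=%g\n",
               t, beh_step(0, 7, t), beh_linear(0, 7, t),
               beh_exponential(0, 7, t), beh_bezier(0, 7, 1, 6, t));
    }
    return 0;
}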

patchlore boosted

Moral of the story is: you need to consider the interpretation layer of melodic content for computer-performed music.

In a theatrical performance, it often happens that the way something is said carries equal or greater weight than the words themselves. Prosody. The same holds true for music.

Hows, not whats.

pbat.ch/wiki/howyousay/

Here's that same melodic phrase with the temporal weight disabled. The melody falls apart! Without meaningful tempo fluctuations to shape the phrasing, it turns into a meandering chromatic mess.

Here's the same melodic idea with another metatarget nested inside it! Not only is the melody more complicated, but I've also made sure the new targets have contrasting temporal weights, which then automagically get sorted out.

beg 1 4
t 0 sg
t 2 sg
t 4 sg
mt 5
mt 2
t 7 sg mass 60 inertia 0.1
t 10 sg mass -30 inertia 0.05
t 9 sg mass 120
t 11 sg mass 100
t 9 sg mass 40 inertia 0.01
t 8 sg mass 0 inertia 0
end loop fin
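
To make the nesting idea concrete, here's a toy C sketch in which a target is either a plain note or a metatarget with children of its own (which can themselves be metatargets). It's an illustration of the concept only, not how Gest actually stores things.

#include <stdio.h>

typedef struct target target;

struct target {
    int is_meta;      /* 0 = plain note, 1 = metatarget */
    double value;     /* pitch value when plain */
    target *children; /* nested targets when meta */
    int nchildren;
    int pos;          /* which child fires on the next pass */
};

/* resolve a target to a concrete pitch, descending into nested metas */
static double resolve(target *t)
{
    target *child;
    if (!t->is_meta) return t->value;
    child = &t->children[t->pos];
    t->pos = (t->pos + 1) % t->nchildren; /* advance for the next pass */
    return resolve(child); /* children may be metatargets themselves */
}

int main(void)
{
    target notes[2] = {
        {0, 7, NULL, 0, 0},
        {0, 10, NULL, 0, 0}
    };
    target meta = {1, 0, notes, 2, 0};
    int i;

    /* each pass, the metatarget hands back a different note: 7 10 7 10 */
    for (i = 0; i < 4; i++) printf("%g ", resolve(&meta));
    printf("\n");
    return 0;
}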

Now, it just so happens that this metatarget is going through the targets in sequence. There's nothing stopping me from having it choose targets at random, which would make the music 'generative'. The temporal weights would still be there, so I'd still have to consider which direction I'd want the tempo fluctuations to go in: speeding up or slowing down. It's all a matter of musical interpretation. I like that that important little detail isn't suddenly thrown away.
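
A sketch of how small that switch could be, with hypothetical names rather than Gest's API:

#include <stdio.h>
#include <stdlib.h>

/* pick the next target index: cycle in order, or roll the dice */
static int next_target(int pos, int ntargets, int generative)
{
    if (generative) return rand() % ntargets;
    return (pos + 1) % ntargets;
}

int main(void)
{
    int pos = 0, i;
    for (i = 0; i < 8; i++) {
        pos = next_target(pos, 5, 1); /* 1 = generative mode */
        printf("%d ", pos);
    }
    printf("\n");
    return 0;
}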

The temporal weight being used to bend tempo can be inverted in 2 lines of code. So, instead of speeding up on the higher notes, it slows down.

This would be the mirror image of the rhythmic phrasing. What's eerie to me is that even though my musician brain can hear the differences in approach, semantically they feel weirdly identical in shaping the intent and phrasing of the melodic line. Maybe it's because the embedded gestures created are identical except for polarity and mapping.
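
A minimal sketch of the kind of flip described above, assuming the weight is a signed 'mass' scaled into a tempo ratio (the names and scaling constant are placeholders, not Gest's):

#include <stdio.h>

#define MASS_TO_RATIO 0.002 /* arbitrary scaling for this sketch */

static double nudged_bpm(double base_bpm, double mass, int invert)
{
    if (invert) mass = -mass; /* flipping the sign is the gist of the change */
    return base_bpm * (1.0 + mass * MASS_TO_RATIO);
}

int main(void)
{
    /* a high note with mass 120: normally speeds up, inverted it slows down */
    printf("normal:   %g BPM\n", nudged_bpm(120.0, 120.0, 0));
    printf("inverted: %g BPM\n", nudged_bpm(120.0, 120.0, 1));
    return 0;
}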

Metatargets added to Gest, my gesture sequencer! Now it's possible for individual targets in a phrase to change each time they are played.

As an example, here's a gesture being used to control the pitch of an FM oscillator. It's a 4-note looping sequence, with the last note being a metatarget that switches between 5 notes. Each of those notes is programmed to manipulate the global tempo in a different way (temporal weight), and as a result you implicitly get very natural-sounding tempo fluctuations that correspond with the phrasing.

The code to program the gesture looks like this:

beg 1 4
t 0 sg t 2 sg t 4 sg
mt 5
t 7 sg mass 60 inertia 0.1
t 9 sg mass 120
t 11 sg mass 100
t 9 sg mass 40 inertia 0.01
t 8 sg mass 0 inertia 0
end loop fin
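
A minimal sketch of the temporal weight idea, assuming 'mass' is a signed nudge to the base tempo and 'inertia' smooths how quickly the nudge takes hold; the constants and names are illustrative, not Gest's internals.

#include <stdio.h>

#define MASS_TO_RATIO 0.002 /* arbitrary scaling for the sketch */

static double temporal_weight(double base_bpm, double mass,
                              double inertia, double *smoothed)
{
    /* goal tempo: mass pushes it above or below the base */
    double goal = base_bpm * (1.0 + mass * MASS_TO_RATIO);
    /* inertia acts as a one-pole smoother: 0 = jump to the goal instantly */
    *smoothed = inertia * (*smoothed) + (1.0 - inertia) * goal;
    return *smoothed;
}

int main(void)
{
    double bpm = 120.0; /* smoothed tempo state */
    int tick;

    /* hold a target with mass 60 and inertia 0.1 for a few ticks:
     * the tempo drifts up toward its nudged value instead of jumping */
    for (tick = 0; tick < 5; tick++) {
        printf("tick %d: %.2f BPM\n", tick,
               temporal_weight(120.0, 60.0, 0.1, &bpm));
    }
    return 0;
}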

patchlore boosted

This goes back to the same conversation I'm always having: We must liberate the media.

Our culture, our stories and songs and myths and legends, can't belong to corporations.

The music industry has committed itself to this death spiral. Slowly starving itself in the name of increased value for shareholders, until such a time as modern culture has left the aging industry behind.

They will lose, in the long term.

We can expedite that loss; the DMCA can be our weapon.

Everyone talks about scale in software development. Why not arpeggio?

patchlore boosted

Looking to share and connect with #smalltech #lowtech #craftisans #makers #creators and those that like to tinker, play, explore, learn, show, teach, and make the world a better place one day at a time.

#introduction
#introductions

"Living with an idea" by @hecanjog

gemini://hecanjog.com/blog/202

> The intense work in a short period is easy to understand and digest, but the doing nothing for sometimes quite a long time is more difficult to grok.

I also think about how the holodeck programs seem to be created. More or less by voice dictation and NLP from the looks of it. Could that be driving a distant relative of copilot, trained on the data trained on the data trained on the data trained on the data by things like stackoverflow and GitHub?

What is the ultimate outcome of something like copilot, allowed to be pushed to its fullest extremes over a long period of time? My creative guess is it will turn programming into prayer with the strange specificity of legalese.

Because these projects focus only on the endpoints, the result is always the same lackluster sound.
