Hi. I'm Lucija. I'm from Paris, France. I'm a developer/creative coder and I like to do research in many scientific and creative fields. I speak French and English, and I'm learning Spanish and Toki Pona.
I manage a streaming platform/netlabel/TOPLAP node called @neuvoids and Microparsec, a Discord server around arts and code. (https://discord.gg/td5ZU8A)
I've been on the fediverse since 2017.
Getting back to my fbm experiments, based on the code found in The Book of Shaders: https://thebookofshaders.com/13/
Instead of shader code, my version runs on the CPU in ANSI C, with some cobbled-together vector functions. The frames get encoded into h264 video via the x264 interface, then wrapped into an mp4 container via ffmpeg.
Syncing this with sound comes next.
Just finished up my initial implementation of f-table lists in #monolith, which are essentially arrays of soundpipe tables. It's a simple and long overdue means to have ftables change over time. Think chord progressions and stuff like that.
Here's the wiki stub on it with a sample runt program (with a link to the woven ftlist program at the bottom):
The relevant code itself in monolith:
And, oh heck, I've also uploaded some bloops that bleep using ftlists and ftlist accessories.
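To give a flavor of the idea: an ftable list is just an array of tables that a program steps through over time. The toy C below is my own illustration of that concept, not monolith's actual data structures or soundpipe's API (the names `ftable`, `ftlist`, etc. are made up here).

```c
#include <stdlib.h>

/* a single table of floats (stand-in for a soundpipe ftable) */
typedef struct {
    float *data;
    int size;
} ftable;

/* an "ftable list": several tables plus a position cursor */
typedef struct {
    ftable *tabs;
    int ntabs;
    int pos; /* currently selected table */
} ftlist;

static ftlist *ftlist_new(int ntabs, int size)
{
    int i;
    ftlist *fl = malloc(sizeof(ftlist));
    fl->tabs = malloc(sizeof(ftable) * ntabs);
    fl->ntabs = ntabs;
    fl->pos = 0;
    for (i = 0; i < ntabs; i++) {
        fl->tabs[i].data = calloc(size, sizeof(float));
        fl->tabs[i].size = size;
    }
    return fl;
}

/* return the current table, then advance with wraparound:
   enough to drive a looping chord progression */
static ftable *ftlist_next(ftlist *fl)
{
    ftable *t = &fl->tabs[fl->pos];
    fl->pos = (fl->pos + 1) % fl->ntabs;
    return t;
}

static void ftlist_free(ftlist *fl)
{
    int i;
    for (i = 0; i < fl->ntabs; i++) free(fl->tabs[i].data);
    free(fl->tabs);
    free(fl);
}
```

Load one table per chord, call something like `ftlist_next` on each beat, and the chord progression loops for free.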
Two FAUST goodies I found this morning.
One is an implementation of the 1176 compressor unit. My favorite hardware compressor (back when I had access to a studio with one). I wonder if it can emulate "all button" mode: https://faust.grame.fr/doc/libraries/index.html#co.limiter_1176_r4_mono
I also found this implementation of the Keith Barr reverb topology done in FAUST. I've been wanting to do exactly this for about a year. Glad to see it's already been done: https://github.com/coreyker/KBVerb
MA propagandists are equating right to repair with rape
i'm retiring an older collaborative project that never quite got off the ground... BUT in doing so we're releasing the tiles we drew for it into the creative commons. 281 free, hand-drawn isometric tiles, to use in your projects:
@paul I think you would love James Tenney's Meta-Hodos and Meta Meta-Hodos if you haven't read them already. They were his pass at creating a music theory text from first principles: https://monoskop.org/images/1/13/Tenney_James_Meta-Hodos_and_Meta_Meta-Hodos.pdf
So, how does this relate to computer music?
Western Music theory, up until the late 20th century, naturally assumed human performers and human audiences. With computer music, it's computer performers and human audiences.
The trick with this translation is decoupling the audience from the performer.
Perhaps it starts with this series of questions:
What does it mean for a computer to compute?
How does the nature of computation relate to musical performance?
How can computational musical performance be relatable to our collective human perception of sound?
Then, it's just a matter of retroactively applying voice->melody->counterpoint->harmony within this new context.
It would begin with what I consider to be music theory first principles. I think this is taught incorrectly in institutions. This causes people to fixate on the wrong things, which leads to poopy-sounding computer-generated music.
Western Music Theory is centered around one instrument. Piano you say? BZZZ! WRONG. Don't let Big Piano fool you. It *all* starts with the human singing voice.
It all begins with melody + plainsong/Gregorian chant. Intervals, step-wise motion, etc. Basically, what goes into making a good melody.
From one melodic line comes multiple melodic lines running at once. This is known as counterpoint. The challenge to solve here is how to get many monophonic sounds playing well together.
From counterpoint, the concept of harmony and harmonic structure naturally falls into place.
Sound, music, computers, etc.
Human being, being human.