patchlore boosted

Hi. I'm Lucija. I'm from Paris, France. I'm a developer/creative coder and I like to do research in many scientific and creative fields. I speak French and English, and I'm learning Spanish and Toki Pona.
I manage @neuvoids, a streaming platform/netlabel/TOPLAP node, and Microparsec, a Discord server around arts and code. (https://discord.gg/td5ZU8A)

I've been on the fediverse since 2017.

:triangle: :tridot:

patchlore boosted

Seeing the Zen/Zig drama makes me glad nobody actually uses my open source software.

Here is the code and Makefile: pbat.ch/wiki/fbm/

It is written in ANSI C and uses the x264 C API and FFmpeg to generate the video.


Getting back to my fbm experiments, based on the code found in the book of shaders: thebookofshaders.com/13/

Instead of shader code, my version runs on the CPU in ANSI C, with some cobbled-together vector functions. The frames get encoded into h264 video via the x264 interface, then wrapped into an mp4 container via ffmpeg.

Syncing this with sound comes next.

patchlore boosted

Albatrosses are glam birds.

Drawing for the latest 100r video.

#krita

Just finished up my initial implementation of f-table lists (ftlists) in monolith, which are essentially arrays of soundpipe tables. It's a simple and long-overdue means of having ftables change over time. Think chord progressions and stuff like that.

Here's the wiki stub on it with a sample runt program (with a link to the woven ftlist program at the bottom):

pbat.ch/proj/monolith/wiki/ftl

The relevant code itself in monolith:

git.sr.ht/~pbatch/monolith/tre

And, oh heck, I've also uploaded some bloops that bleep using ftlists and ftlist accessories.

The bittersweet joy of writing software that is "good enough".

FAUST goodies 

Two FAUST goodies I found this morning.

One is an implementation of the 1176 compressor unit. My favorite hardware compressor (back when I had access to a studio with one). I wonder if it can emulate "all button" mode: faust.grame.fr/doc/libraries/i

I also found this implementation of the Keith Barr reverb topology done in FAUST. I've been wanting to do exactly this for about a year. Glad to see it's already been done: github.com/coreyker/KBVerb

patchlore boosted

Found this super endearing

The IDE for the CS50 online course has a built-in duck for rubber duck debugging

patchlore boosted

i'm retiring an older collaborative project that never quite got off the ground... BUT in doing so we're releasing the tiles we drew for it into the creative commons. 281 free, hand-drawn isometric tiles, to use in your projects:

https://withering-systems.itch.io/city-game-tileset

patchlore boosted
#Happy1600000000 seconds since 1970-01-01!!

$ date -d @1600000000 +%FT%T
2020-09-13T12:26:40

That's half an hour ago if you're counting in UTC. The next even hundred million seconds will be in November 2023, so it's not a super rare occurrence. After all, it's happened 16 times in the 50 years since @0. 😀
patchlore boosted

@paul I think you would love James Tenney's Meta-Hodos and Meta Meta Hodos if you haven't read it already. It was his pass at creating a music theory text from first principles: https://monoskop.org/images/1/13/Tenney_James_Meta-Hodos_and_Meta_Meta-Hodos.pdf

So, how does this relate to computer music?

Western Music theory, up until the late 20th century, naturally assumed human performers and human audiences. With computer music, it's computer performers and human audiences.

The trick with this translation is decoupling the audience from the performer.

Perhaps it starts with this series of questions:

What does it mean for a computer to compute?

How does the nature of computation relate to musical performance?

How can computational musical performance be relatable to our collective human perception of sound?

Then it's just a matter of retroactively applying voice->melody->counterpoint->harmony within this new context.


It would begin with what I consider to be music theory first principles. I think this is taught incorrectly in institutions. This causes people to fixate on the wrong things, which leads to poopy-sounding computer-generated music.

Western Music Theory is centered around one instrument. Piano you say? BZZZ! WRONG. Don't let Big Piano fool you. It *all* starts with the human singing voice.

It all begins with melody: plainsong/Gregorian chant. Intervals, stepwise motion, etc. Basically, what goes into making a good melody.

From one melodic line comes multiple melodic lines running at once. This is known as counterpoint. The challenge to solve here is getting many monophonic voices to play well together.

From counterpoint, the concept of harmony and harmonic structure naturally falls into place.


A part of me is tempted to write some sort of western music theory textbook, but with the concepts re-imagined to work better with the computer medium.

Sporth/Soundpipe are now *only* hosted on sourcehut:

git.sr.ht/~pbatch/soundpipe

git.sr.ht/~pbatch/sporth

Sporth has been relicensed to use the unlicense: unlicense.org/

Soundpipe collectively is still MIT, but I've placed public domain notices for many of the individual files.

post.lurk.org

Welcome to post.lurk.org, an instance for discussions around cultural freedom, experimental, new media art, net and computational culture, and things like that.