@paul I'm kind of curious about VST development, it seems like a fun way to get into some DSP stuff. Should I just start going through JUCE tutorials? You seem like you would have opinions here.

Disclaimer: I don't actually build plugins. But I've worked with many architectures in the past, including VST.

Fuck JUCE. It's going to burn you in the long run. It has overreaching complexity and a sketchy license. I'd only recommend it as a fast short-term option for people solely interested in making commercial products to sell to other people. It's good at that.

I'd use the VST format directly, rather than learn a wrapper. That's a better long-term solution. The new SDK (VST3?) is now FLOSS. At this point, I imagine there are some great, simple examples to get you started.

A thing that surprised me starting out was how much boilerplate code VST (or really any plugin) development requires, and that's not even counting the UI stuff. The pure DSP code is often quite terse by comparison. Something to keep in mind.

It's helpful to learn about dynamically loaded code. On Linux, that's things like libdl. Learning how to compile a C function into a shared object file, and then dynamically load + call that function from another compiled C program really made some things click for me, and took some of the mystery out of how plugins work.

Starting out, I got a high bang-for-buck ratio writing offline audio DSP code using libsndfile (specifically, modifying the sfprocess.c example included in the libsndfile source). It removes headaches like threading, realtime audio pains, and GUI development. It's also way easier/faster to debug! It's definitely a different headspace, but it can be a rewarding process, especially when you start to do things that can't easily work in realtime. @hecanjog does a lot of this, and can probably say more on this craft than I can. Personally, I attribute all my early success to not worrying about realtime and just sticking to writing WAV files.

No matter what you do, the real trick is to connect what you build to your already existing workflow/music ASAP. It doesn't even have to be all that clever either. Even a simple filter or tone generator can be a fierce musical weapon in the right hands...

@paul This is amazing info, thank you! I feel a little overwhelmed at the moment but when I get to dig into this a bit more I am very excited to 🎼


@jcmorrow I tend to spew words.

Digest, and feel free to come back with questions/concerns if you have any :)


@paul I think even just the idea "hey, just write a WAV file" is really nice. With visual stuff it's kind of obvious when you're starting out that you should do something with stills rather than real-time, but with audio, real-time just seems to be the default for a lot of tools. But I remember how nice it felt when I realized I could just render ppm files with my raytracer — it felt *so* liberating.

@jcmorrow That's a great analogy. You can do a similar thing in the audio world by writing floats to disk and then using sox to convert them to an audio format. Libsndfile is pretty painless to use, though. When I was first starting out, it was the only thing that stuck for me. All the fancy iOS/plugin stuff available at the time was too confusing for my brain. Still is a bit, if I'm being honest.
