
1/60s loop with RMS low pass filter set to 60/32Hz. the video is the fake algorithm (which I think solves the harmonic operator, not the biharmonic operator) that I found online and implemented in

1/10s loop with RMS low pass filter set to 10/32Hz, MIDI notes for filters from 24 to 72, tanh() on each band after gain. sounds more mellow than the version described previously.
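A loose sketch of the band stage as I read this (not the actual patch): per-semitone bandpasses for MIDI notes 24 to 72, a gain, then tanh() on each band. The sample rate, the gain value and the use of scipy's iirpeak design are my assumptions; the 1/10s feedback loop and the 10/32Hz RMS level control are not modelled here.

```
import numpy as np
from scipy.signal import iirpeak, lfilter

SR = 48000  # assumed sample rate

def midi_to_hz(n):
    return 440.0 * 2.0 ** ((n - 69) / 12.0)

def shape_bands(x, gain=4.0, q=20.0):
    """Split x into per-semitone bands (MIDI 24..72), gain, then tanh() each."""
    out = np.zeros_like(x)
    for n in range(24, 73):
        b, a = iirpeak(midi_to_hz(n), q, fs=SR)
        out += np.tanh(lfilter(b, a, x) * gain)
    return out
```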

New audio piece: "Harmonic Protocol".

Stereo feedback loop, 4 secs long, with left and right blended by a rotation matrix

Inside the loop, compute RMS energy per semitone per channel via a bank of biquad bandpass filters (Q = 17.310) from MIDI note 24 through 96, accumulated modulo 12.

Inside the loop, scale each individual semitone by the energy of the octave accumulation 7 semitones away (pick a direction).

Inside the loop, apply strong dynamic range compression to normalize level to ~1.
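A minimal offline sketch of one pass through the loop described above (not the actual piece): the biquad bandpasses are approximated with scipy's iirpeak design, the sample rate and rotation angle are made up, and the strong compression is reduced to plain RMS normalisation.

```
import numpy as np
from scipy.signal import iirpeak, lfilter

SR = 48000                             # assumed sample rate
LOOP = 4 * SR                          # 4-second stereo loop buffer
Q = 17.310                             # Q from the post
NOTES = range(24, 97)                  # MIDI notes 24..96

def midi_to_hz(n):
    return 440.0 * 2.0 ** ((n - 69) / 12.0)

FILTERS = {n: iirpeak(midi_to_hz(n), Q, fs=SR) for n in NOTES}

def feedback_pass(buf, theta=0.1):
    """buf: (LOOP, 2) float array; returns the next contents of the loop buffer."""
    # blend left and right with a rotation matrix
    c, s = np.cos(theta), np.sin(theta)
    rot = buf @ np.array([[c, -s], [s, c]])

    # RMS energy per semitone per channel, accumulated modulo 12
    energy = np.zeros((12, 2))
    bands = {}
    for n in NOTES:
        b, a = FILTERS[n]
        band = lfilter(b, a, rot, axis=0)
        bands[n] = band
        energy[n % 12] += np.sqrt(np.mean(band ** 2, axis=0))

    # scale each semitone band by the pitch-class energy 7 semitones up
    out = np.zeros_like(rot)
    for n in NOTES:
        out += bands[n] * energy[(n + 7) % 12]

    # stand-in for the compression: normalise the overall level to ~1
    return out / (np.sqrt(np.mean(out ** 2)) + 1e-12)
```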

claude boosted

I'm teaching about #Audacity for the first time tomorrow, so I've got two questions for you:

* Which is the feature you think everyone should know about?
* What is a rad public-domain / CC audio recording we should use?

#floss #audio

comparing eigenvectors of harmonic operator (Laplacian) and biharmonic operator (Laplacian squared), sorted by magnitude of eigenvalue (smallest first)

they're close, but not the same
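A small dense sketch of this comparison (my reconstruction, not the original code), with both operators built the way the post further down describes: the 3x3 Laplacian kernel, and that kernel convolved with itself, applied with a zero boundary on a small grid. Grid size and boundary handling are assumptions.

```
import numpy as np
from scipy.ndimage import convolve
from scipy.signal import convolve2d

N = 24                                     # small grid so dense eig is cheap
lap3 = np.array([[0,  1, 0],
                 [1, -4, 1],
                 [0,  1, 0]], dtype=float)
bih5 = convolve2d(lap3, lap3)              # 5x5 "Laplacian squared" kernel

def operator_matrix(kernel):
    """Dense matrix of 'convolve the NxN image with kernel' (zero boundary)."""
    A = np.zeros((N * N, N * N))
    for i in range(N * N):
        e = np.zeros((N, N))
        e.flat[i] = 1.0
        A[:, i] = convolve(e, kernel, mode='constant').ravel()
    return A

def leading_modes(A, k):
    """Eigenvectors sorted by magnitude of eigenvalue, smallest first."""
    w, v = np.linalg.eigh(A)               # both operators are symmetric here
    return v[:, np.argsort(np.abs(w))[:k]]

vh = leading_modes(operator_matrix(lap3), 6)   # harmonic
vb = leading_modes(operator_matrix(bih5), 6)   # biharmonic
# cosines of the principal angles between the two leading eigenspaces:
# all 1.0 would mean identical modes; they come out close to, but not
# exactly, 1, because squaring the kernel is not quite squaring the
# operator once the boundary truncation kicks in.
print(np.linalg.svd(vh.T @ vb, compute_uv=False))
```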

link to stroboscopic youtube video 

Using to read from a file, with a backwards seek to do analysis with , works great!

However, trying the same trick with an input fails miserably: the frame rate drops from 40+ to 3 after some minutes of audio-file time :( I conjecture (based on and logs) that it is seeking back to the start of the file and re-decoding the whole thing on every small backwards seek. Unsustainable.

Workaround: to decompress beforehand.

link to stroboscopic youtube video 

The eigensystem approach is reasonably performant (13m40s for 1000 images) but doesn't actually solve what I want to solve, which is a virtual plate being stimulated with a stereo audio signal pair to make a music video.

each frame has lines at the nodes (non-moving points) of an eigenmode of the operator; successive frames have decreasing eigenvalue.

Implemented in using its eigensystem solver. I used a 5x5 kernel for the operator, based on the 3x3 Laplacian kernel convolved with itself; not 100% sure that this is the correct way to go about it, but the results look reasonable-ish.
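For reference, a sketch of that kernel construction (my reading of the post), plus a guess at extracting the nodal lines of an eigenmode as the places where it changes sign:

```
import numpy as np
from scipy.signal import convolve2d

lap3 = np.array([[0,  1, 0],
                 [1, -4, 1],
                 [0,  1, 0]], dtype=float)
bih5 = convolve2d(lap3, lap3)   # 5x5 kernel = 3x3 Laplacian convolved with itself
# bih5:
# [[ 0  0  1  0  0]
#  [ 0  2 -8  2  0]
#  [ 1 -8 20 -8  1]
#  [ 0  2 -8  2  0]
#  [ 0  0  1  0  0]]

def nodal_lines(mode, shape):
    """Boolean mask of pixels where an eigenmode changes sign (the non-moving nodes)."""
    m = np.sign(mode.reshape(shape))
    nodes = np.zeros(shape, dtype=bool)
    nodes[:-1, :] |= m[:-1, :] != m[1:, :]
    nodes[:, :-1] |= m[:, :-1] != m[:, 1:]
    return nodes
```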

claude boosted

If you are in with Asso Apo at Stour Space
Electropixel - London: Electronoise night
Saturday, 13 July, 8pm - til late
Book Tickets in Advance! trybooking.com/uk/book/event?e

there should still be £15 tickets on the door for tonight's but advance tickets may be a bit cheaper (available until about 6:30pm according to the website)

algorave.com/elephant/

I'm on early, around 8pm

claude boosted

June 20th

facebook.com/events/3318283907
tickets.partyforthepeople.org/
residentadvisor.net/events/126

> Algorave+friends return to Corsica Studios for a two room featuring: Lil Data (PC Music) // Heavy lifting (Pickled Discs) x Graham Dunning (Fractal Meat) // Miri Kat (Establishment) // Deerful // Hard On Yarn Sourdonk Communion (Hmurd x peb) // Class Compliant Audio Interfaces x Hellocatfood (Computer Club/Keysound) // Digital Selves // Mathr // xname // BITPRINT // Deep Vain // Hortense // Tsun Winston Yeung // +777000 // Coral Manton // Rumblesan + more TBA

the bot works by iteratively zooming in. given a view, it computes a bunch of randomly zoomed in pictures, and replaces the view by the zoom that had the best score.

currently it computes 16 zooms at each iteration, and does 10 iterations. ideally the score gradually increases, but that doesn't always happen.

the final iteration's hiscore is rendered bigger.
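A schematic version of that search loop (not the real bot): render() and score() stand in for the fractal renderer and the fitness function from the other posts, and the zoom factors are made up.

```
import random

ZOOMS_PER_ITERATION = 16
ITERATIONS = 10

def random_zoom(view, min_factor=1.5, max_factor=4.0):
    """Pick a random sub-view (centre x, centre y, radius) inside view."""
    cx, cy, r = view
    nr = r / random.uniform(min_factor, max_factor)
    return (cx + random.uniform(-1.0, 1.0) * (r - nr),
            cy + random.uniform(-1.0, 1.0) * (r - nr),
            nr)

def explore(view, render, score):
    """Replace the view by the best-scoring of 16 random zooms, 10 times over."""
    for _ in range(ITERATIONS):
        candidates = [random_zoom(view) for _ in range(ZOOMS_PER_ITERATION)]
        view = max(candidates, key=lambda v: score(render(v)))
        # ideally the best score increases each iteration, but not always
    return view
```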

the motivation is to get images with a wide variation of regions of differing local fractal dimension, which I hope would give interesting textures

unfortunately it tends to go dark and stretched quite often...

oops, I used 10% and 90%, not 25% and 75%; it avoids getting 0 scores too often

working on a bot to automatically explore for

the main problem is devising a fitness function, I came up with this:

1. for each pixel in the fractal, compute the local box dimension of its neighbourhood. use the gray value of each pixel as its measure. use square neighbourhoods of radius 1, 3, 7, 15, ..., with simple linear regression to get the slope of the log(measure)/log(neighbourhood size) graph

2. compute a histogram of all these dimensions (I simply sorted the array). then take as fitness metric the difference between 25% and 75% through the array: this is typically the width of the central bulge in the histogram.

I came up with this after skimming the master's thesis "Multifractal-based Image Analysis with applications in Medical Imaging" by Ethel Nilsson (www8.cs.umu.se/education/exami); viewing the dimension image in geeqie with the histogram overlaid was interesting. also inspired by mrob.com/pub/muency/deltahausd
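A rough reconstruction of the metric described above: the radii, the grey-value measure and the percentile spread are from the posts (with the 10%/90% correction from further up); the rest of the details are my guesses.

```
import numpy as np
from scipy.ndimage import uniform_filter

RADII = np.array([1, 3, 7, 15])

def local_box_dimension(gray):
    """Per-pixel slope of log(measure) against log(neighbourhood size)."""
    gray = np.asarray(gray, dtype=float)
    sizes = 2 * RADII + 1
    logs = np.log(sizes.astype(float))
    # measure = sum of grey values over each square neighbourhood
    measures = np.stack([uniform_filter(gray, size=int(s)) * s * s for s in sizes])
    logm = np.log(measures + 1e-9)
    x = logs - logs.mean()
    return (x[:, None, None] * (logm - logm.mean(axis=0))).sum(axis=0) / (x * x).sum()

def fitness(gray):
    """Spread of the sorted local-dimension array between 10% and 90%."""
    d = np.sort(local_box_dimension(gray).ravel())
    return d[int(0.9 * d.size)] - d[int(0.1 * d.size)]
```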

